A Trick of the Mind: How the Brain Invents Your Reality, by Daniel Yon
What's it about?
A Trick of the Mind (2025) asks a provocative question: what if the world you experience is less reality itself and more a story your brain invents? It makes a strong case for how our minds act like scientists – predicting and testing what we see and believe. It also shows how this process can sometimes lead to brilliant ideas while other times it can trap us in unhealthy distortions.
Reality is not a single, solid thing – it’s layered. Philosopher Karl Popper suggested we actually live in three overlapping worlds: the material world of matter and molecules, the mental world of people and their hidden thoughts, and the world of ideas – our languages, myths, and paradigms that live beyond any one individual. In each of these realms we can see how the brain acts like a scientist, forever building and testing theories to make sense of them. What we see, hear, believe, and imagine doesn’t come to us as raw data – it’s filtered through the predictions and models our brains are constantly generating, often without us realizing it.
This lesson will break all of this down. We’ll start by exploring how the brain builds theories just to perceive the physical world around us. Then we’ll see how this same machinery lets us navigate the hidden world of other minds and reflect inward to build a model of ourselves. Finally, we’ll zoom out to the world of ideas, tracing how curiosity, creativity, and paradigm shifts emerge from the same predictive processes. Along the way, we’ll uncover both the power and the pitfalls of a brain that invents its own reality.
For centuries, we’ve been quite willing to draw a sharp line between perception and hallucination. Hearing voices or seeing visions has been taken as proof that someone has slipped from the world of reality into illness. But when we step back and ask how perception actually works, that line becomes blurrier.
After all, our brains are not windows to the world. All they get are fragments of information – measurements of light, sound, touch, taste, and smell. And from those scraps of data, the brain must construct the vivid world we experience.
Consider vision. When you look at a loved one’s face, what falls onto your retina is only a flat, two-dimensional pattern of light and shadow. From that shadow alone, there are countless possible objects that could have produced it. The brain’s task is what engineers call an “ill-posed inverse problem”: trying to reconstruct a three-dimensional reality from incomplete, uncertain data.
The way around this problem is to become a better guesser – to act like a scientist. Modern neuroscience frames perception as hypothesis testing. Higher brain regions send predictions down, lower regions send sensory evidence up, and what we see is the negotiated truce. To put it another way: prior beliefs meet new data, and beliefs get updated.
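That negotiated truce can be pictured as a precision-weighted average – a simplified take on the predictive-processing idea, not the book's own formulation. The function name and numbers below are illustrative:

```python
def update_belief(prior_mean, prior_precision, evidence, evidence_precision):
    """Combine a prior prediction with noisy sensory evidence.

    Each estimate is weighted by its precision (inverse variance):
    whichever signal is more reliable pulls the percept toward itself.
    """
    total_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean +
                      evidence_precision * evidence) / total_precision
    return posterior_mean, total_precision

# A strong prior and weak evidence: the percept stays near the expectation.
percept, _ = update_belief(prior_mean=10.0, prior_precision=4.0,
                           evidence=20.0, evidence_precision=1.0)
print(round(percept, 1))  # 12.0 – much closer to the prior than the evidence
```

When the precisions are equal, the same rule lands halfway between prediction and evidence; hallucination-prone perception corresponds to giving the prior far more weight than the data deserves.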
What holds true for vision holds true for language. Speech is a continuous, messy stream of noises, yet brains carve it into understandable words by forecasting what likely comes next. If someone is mumbling or we only catch fragments over a bad phone connection, we can still suss it out by filling in the blanks. Still, when we lean too hard on what we’re familiar with, it can lead to mishearings – like hearing Jimi Hendrix sing “excuse me while I kiss this guy” rather than “excuse me while I kiss the sky.”
And in more serious cases, when predictions outweigh incoming evidence, hallucinations can emerge. Research shows that people prone to hallucinations often rely more heavily on prior knowledge when interpreting ambiguous sights or sounds. Their brains don’t just register what is there; they actively fill in what they expect to be there, sometimes so powerfully that expectation becomes experience.
The task of a healthy mind is to keep that balance between expectations and experience nimble – to use predictions to stabilize our understandings, but to let evidence tug them in a new direction when appropriate. In the sections ahead, we’ll continue to explore this balance, first by exploring the models of cause and effect that are constantly in motion.
To put it simply, perception just isn’t enough. The brain also needs a manual for doing – taking action. So, again, much like a scientist, it builds models of cause and effect, which is essentially a model of you. Each action you take – flip a switch, crack a joke, move a finger – is a tiny experiment, and the results feed back into a theory of what you can influence.
And yet, just as in the laboratory, our hypotheses can backfire. Sometimes we’ll exude confidence in situations that are completely out of our control. Other times we’ll feel helpless in circumstances that are entirely within our control. How often have you pressed an elevator button harder, or more times, even though you know full well it won’t have any effect?
Social context also plays a big role in the actions we take – particularly in how truthful we are. For example, when we think that people aren’t listening to us, we tend to be dismissive about potential pitfalls when promoting our ideas. On the flip side, when we feel like we have a lot of influence we’ll be more hesitant and cautious – perhaps out of fear of losing that influence.
There have also been a number of studies on our willingness to inflict pain on others, which changes with the circumstances. How many painful electric shocks would you give a stranger in order to hold on to a stack of free money? Experiments suggest that, once the money is actually in your hand, you’ll be more willing to inflict pain than you predicted under hypothetical circumstances.
When you act voluntarily, your action and its outcome feel compressed together in time – a phenomenon called intentional binding. Interestingly enough, the brain signals behind that feeling change when a third party is ordering you to act. Those orders cause the signals to shift, as if the mind is offloading responsibility.
At all times your sense of control is a living hypothesis. It updates with each experiment you run, each belief you hold, and each authority you obey. At any moment you can find yourself in a tangled situation, where the data is misinterpreted and the actions come out wrong. But by noticing how our models tilt perception of cause and effect, we can recalibrate – seeing more clearly where our actions truly matter, and where they don’t.
Let’s move away from our internal inspections and look outward to others. Every day we’re attempting to decode other people’s intentions and emotions, even when their signals are ambiguous. A colleague’s silence, a friend’s sudden exit – are they annoyed? Distracted? What’s going on?
To cope, the brain runs what’s called a “Galileo maneuver”: it first points its instruments inward, using firsthand knowledge of how feelings drive our own movements, then projects those models outward to interpret others. Motion can tell us a lot: buoyant steps often signal joy, heavy movements sadness, sharp bursts anger.
But the Galileo maneuver has its flaws, since we’re most accurate with people who move like us. A youngster might look at the heavier, slower movements of an older person and mistake these signals for sadness, for example. The same mistakes can occur when two people from different cultures or neurotypes meet for the first time.
When a person’s gestures and tempos are different from our own, it can lead to what’s known as a “double empathy problem.” In one revealing study, researchers used simple animated drawings and asked participants to describe what was happening. Neurotypical researchers observed that autistic participants struggled to explain the movements of the figures. But when the roles were reversed – when autistic participants created their own animated stories – the researchers found themselves unable to follow the narrative.
It’s perhaps unsurprising, then, that people with richer and more varied social experiences develop a deeper well of knowledge for recognizing different traits and expressions. Just like machine-learning algorithms, our mental models absorb whatever data we feed them. Accuracy improves when we broaden those inputs—by seeking diverse experiences, listening to different “movement vocabularies,” and continuously updating our mental maps with better data. The more varied the worlds we explore, the more gracefully we navigate the orbits of other minds.
If we think of our brains as scientists running constant experiments, it makes sense that confidence grows with successful predictions. Research shows that early experiences of success and failure shape our internal models in lasting ways. Wins give us a boost and keep us engaged, while losses can cause us to shrink away. Yet persistence pays off: people who keep trying after setbacks often catch up with the so-called early winners.
Building a self-model that emphasizes perseverance can require some effort. As we’ve touched on, people routinely misjudge their own behavior. Our self-monitoring system, known to scientists as metacognition, or thinking about thinking, has to deal with a lot of messy signals. It can be a challenge to know what to trust and what to ignore.
What do we believe about ourselves, and why do we believe it? And when we receive new information that may challenge those beliefs, when is the right time to be stubborn and when is it appropriate to reconsider things?
The process is rational in spirit – yesterday’s reliability forecasts tomorrow’s. But it can skew. Early failures seed underconfidence, discouraging new attempts. But what is needed, as with any system, is new data. Without new data, the pessimistic loop closes in on itself. In depression, that loop hardens, draining effort even on good days.
It helps to understand that expectations don’t just steer choices; they shape how experience feels. It’s similar to the glass half-empty or half-full scenario. Two people can look at an identical visual input and see different things based on their expectations. And while too little confidence can keep us stuck, too much brings its own distortions – especially in how we handle new evidence. Overconfidence pushes us to favor confirmation over contradiction, reinforcing what we already believe instead of helping us learn.
Like everything else, self-belief is a living model – constantly shaped by success and failure, tuned by expectation, and sometimes trapped by its own predictions. The remedy is to keep gathering fresh experiences, notice when confidence distorts perception, and allow enough trials for the model to update itself. The good news is that, as we’ll see in the next section, we’re built to enjoy learning.
Dopamine has a certain reputation these days. Most of us have heard about this neurotransmitter and think of it as the brain’s pleasure chemical. But that’s only a small part of the story. The joy we feel when we get a hit of dopamine isn’t just related to the pleasures of hedonistic pursuits. It also comes down to our brains being wired for curiosity, and the joy of discovering and learning. One of our defining characteristics is our innate drive to chase understanding even when it has no obvious payoff. Why else build particle accelerators or spend evenings lost in philosophy?
What really gets the brain’s pleasure center going is surprise. Sure, we all like getting a reward, but the real turn-on lies in the gap between expectation and reality – when there’s a prediction error and the reward is unexpected. If we can accurately predict when the prize will arrive time and time again, it becomes boring – we lose interest.
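In computational terms this gap is a reward prediction error: the signal tracks the difference between expected and received reward, and as the prediction improves, the surprise – and with it the kick – shrinks. A toy sketch, with an arbitrary learning rate rather than any figure from the book:

```python
def surprise_over_trials(rewards, learning_rate=0.5):
    """Return the prediction error felt on each trial of a repeated reward."""
    expectation = 0.0
    errors = []
    for reward in rewards:
        error = reward - expectation          # prediction error: the "surprise"
        expectation += learning_rate * error  # the brain updates its forecast
        errors.append(error)
    return errors

# The same prize five times in a row: the surprise halves on every trial.
print(surprise_over_trials([1.0] * 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Once the expectation matches the reward, the error – and the thrill – goes to zero, which is exactly the boredom the paragraph above describes.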
Scientists call it the “hedonic treadmill.” Gains feel good at first, then expectations catch up and the high fades. But this applies as much to money and food as it does to information. Knowledge itself is delicious. It’s why learning can feel more rewarding than winning. In a study of people gambling on car races, happiness rose not with the biggest payouts but with the most informative updates. If people learned something during the race – even if they lost their bet – they came away happier than they were before.
Surprise itself is pleasurable. Eureka moments feel good because the brain is wired to reward updates. And that’s the root of wonder. From art to science to religion, wonder is the spark. The great thing is, unlike money or food, this currency doesn’t deplete – curiosity replenishes itself, each answer birthing new questions.
Earlier on, we mentioned how the models in our brain resemble the models used in machine learning. The better the data sets, the better the output. And as artificial intelligence continues to improve, it may lead you to wonder: if machines can stitch together fluent sentences by tracking patterns, are our brains doing something similar? Are we also just prediction machines at heart?
It’s true that we put a lot of emphasis on language. When Google engineer Blake Lemoine spent time conversing with LaMDA, a large language model, he came to believe it had become sentient, simply because it talked to him in such a clear and convincing way. Others were quick to argue that these systems are just elaborate pattern-matchers – and yet their output can feel eerily human.
It was enough to unsettle adherents of thinkers like Noam Chomsky and René Descartes, who held that language is uniquely human, irreducible to mechanism. If both machines and brains rely on predictive tricks, maybe originality isn’t about escaping patterns – maybe it emerges from them. Maybe creativity and originality are a programmable process: generate endless variants, most ordinary, some surprising, then sift and select. No magical genius necessary.
Unlike machines, however, our brains filter patterns through constellations of memories, beliefs, and cultures. Ideas mutate as they pass between people. The psychologist Donald Campbell has argued that universities and organizations will generate more creative breakthroughs when they stop creating silos and start overlapping their expertise.
This mixing and mutating of ideas and perspectives is something that humans are uniquely capable of. Picasso’s Cubism, for instance, likely grew from mixing his fascination with African masks with his background in European painting. Creativity isn’t plucking ideas from thin air – it’s recombination, the product of diverse inputs colliding.
Machines may replicate patterns, but human originality comes from embedding them in living, social minds – minds that are always updating, always reweaving.
Throughout the lesson we’ve followed the “scientist in your skull,” watching how it steadies perception with theories. But models wobble. When the world grows volatile, the same machinery that guides us can lure us toward odd beliefs – including conspiracies. The issue here isn’t strange minds, but uncertain times. And when uncertain times lead to paradigm shifts, our models can be thrown seriously off balance.
When our everyday world shifts from “normal” to being full of anomalies, our confidence cracks. Our expectations and predictions feel off. A paradigm shift forces the question of trust that our brain faces daily. Do we trust old experiences or new data? When should we remain stubborn in our beliefs, treating anomalies as chaotic noise and coincidence, and when should we be flexible, and update our beliefs accordingly?
Meta-learning – essentially learning how much to learn – helps solve this dilemma. It’s about paying attention to stability versus volatility. One bad espresso at a café that’s always reliable? Probably a fluke. But if staff turnover is constant, that same sip signals real change. By tracking stability versus volatility, we can tune how quickly we need to update.
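One way to make the café intuition concrete is a learner whose learning rate itself rises with recent volatility. This particular rule and its constants are a made-up sketch of the idea, not the specific model described in the book:

```python
def volatility_tuned_update(belief, observation, past_errors,
                            base_rate=0.05, window=5):
    """Nudge a belief toward an observation, learning faster in volatile times.

    `past_errors` holds the sizes of recent prediction errors: when they
    have been small (a stable world), one surprise barely moves the belief;
    when they have been large (a volatile world), new evidence sinks in fast.
    """
    recent = past_errors[-window:]
    volatility = sum(recent) / len(recent) if recent else 0.0
    rate = min(1.0, base_rate + volatility)  # crude meta-learning rule
    return belief + rate * (observation - belief)

# The same bad espresso (0.2) against the same belief (0.9 = "reliable café"):
stable = volatility_tuned_update(0.9, 0.2, [0.05, 0.04, 0.06, 0.05, 0.05])
volatile = volatility_tuned_update(0.9, 0.2, [0.6, 0.5, 0.7, 0.6, 0.5])
print(round(stable, 2), round(volatile, 2))  # 0.83 0.46
```

After a calm history the bad shot barely dents the belief; after a turbulent one, the same evidence nearly halves it – which is the book's point about unstable eras letting flimsy evidence seep in.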
Real-world upheavals – pandemics, social unrest – push minds toward high learning rates. In those states, even flimsy evidence can seep in, which helps explain why conspiracies spike in unstable eras.
Chemistry plays a role, too. Noradrenaline, released from the locus coeruleus, signals volatility and can make us latch on to flimsy evidence. Interestingly enough, when people take beta-blockers like propranolol, it can dampen the effect, make the world feel steadier, and lead to more level-headed decision-making. On the other hand, stimulants like Ritalin can boost noradrenaline levels, causing us to switch our beliefs faster.
So what’s best: rigid theories or constant flux? Neither. It depends on the circumstances. Filters are not flaws – they’re how we see – but they must be revisable. The art is holding theories firmly in calm weather and loosening them when the winds shift.
It’s a balance. To grasp the mind fully, we need both microscopes and wide-angle views, to be open to the ideas of artists as well as scientists. The good news is that the scientist in your head is curious, adaptable, and capable of shifting when anomalies appear. You can also think in terms of an author, because your mind works in drafts. Today’s best draft may feel complete, but tomorrow’s data can rewrite it. And that’s the lasting message: your reality is a collaboration between world and mind, and like everything else, you’re a work in progress.
The main takeaway of this lesson on A Trick of the Mind by Daniel Yon is that reality is not passively absorbed but actively constructed by our brains, which function like scientists – building, testing, and revising theories to make sense of the material world, the minds of others, and the world of ideas. These predictive models allow us to perceive, communicate, and create, but they also leave us vulnerable to illusions, biases, misplaced confidence, and even conspiracy thinking when the world grows volatile. From dopamine-driven curiosity to metacognitive self-reflection, from the sparks of originality to the shifts of entire paradigms, our thoughts, beliefs, and perceptions are shaped as much by past experience and expectation as by the raw data of the present. Ultimately, our minds are works in progress, forever revising their models of reality, open to surprise, and always capable of change.
A Trick of the Mind (2025) asks a provocative question: what if the world you experience is less reality itself and more a story your brain invents? It makes a strong case for how our minds act like scientists – predicting and testing what we see and believe. It also shows how this process can sometimes lead to brilliant ideas while other times it can trap us in unhealthy distortions.
Reality is not a single, solid thing – it’s layered. Philosopher Karl Popper suggested we actually live in three overlapping worlds: the material world of matter and molecules, the mental world of people and their hidden thoughts, and the world of ideas – our languages, myths, and paradigms that live beyond any one individual. In each of these realms we can see how the brain acts like a scientist, forever building and testing theories to make sense of them. What we see, hear, believe, and imagine doesn’t come to us as raw data – it’s filtered through the predictions and models our brains are constantly generating, often without us realizing it.
This lesson will break all of this down. We’ll start by exploring how the brain builds theories just to perceive the physical world around us. Then we’ll see how this same machinery lets us navigate the hidden world of other minds and reflect inward to build a model of ourselves. Finally, we’ll zoom out to the world of ideas, tracing how curiosity, creativity, and paradigm shifts emerge from the same predictive processes. Along the way, we’ll uncover both the power and the pitfalls of a brain that invents its own reality.
For centuries, we’ve been quite willing to draw a sharp line between perception and hallucination. Hearing voices or seeing visions has been taken as proof that someone has slipped from the world of reality into illness. But when we step back and ask how perception actually works, that line becomes blurrier.
After all, our brains are not windows to the world. All they get are fragments of information – measurements of light, sound, touch, taste, and smell. And from those scraps of data, the brain must construct the vivid world we experience.
Consider vision. When you look at a loved one’s face, what falls onto your retina is only a flat, two-dimensional pattern of light and shadow. From that shadow alone, there are countless possible objects that could have produced it. The brain’s task is what engineers call an “ill-posed inverse problem”: trying to reconstruct a three-dimensional reality from incomplete, uncertain data.
The way around this problem is to become a better guesser – to act like a scientist. Modern neuroscience frames perception as hypothesis testing. Higher regions send predictions down, lower regions send evidence up, and what we see is the negotiated truce. To put it another way: prior outcomes meet new data, beliefs get updated.
What holds true for vision holds true for language. Speech is a continuous, messy stream of noises, yet brains carve it into understandable words by forecasting what likely comes next. If someone is mumbling or we only catch fragments over a bad phone connection, we can still suss it out by filling in the blanks. Still, when we lean too hard on what we’re familiar with, it can lead to mishearings, like when you think he said “excuse me while I kiss this guy” rather than “excuse me while I kiss the sky.”
And in more serious cases, when predictions outweigh incoming evidence, hallucinations can emerge. Research shows that people prone to hallucinations often rely more heavily on prior knowledge when interpreting ambiguous sights or sounds. Their brains don’t just register what is there; they actively fill in what they expect to be there, sometimes so powerfully that expectation becomes experience.
The task of a healthy mind is to keep that balance between expectations and experience nimble – to use predictions to stabilize our understandings, but to let evidence tug them in a new direction when appropriate. In the sections ahead, we’ll continue to explore this balance, first by exploring the models of cause and effect that are constantly in motion.
To put it simply, perception just isn’t enough. The brain also needs a manual for doing – taking action. So, again, much like a scientist, it builds models of cause and effect, which is essentially a model of you. Each action you take – flip a switch, crack a joke, move a finger – is a tiny experiment, and the results feed back into a theory of what you can influence.
And yet, just as in the laboratory, our hypotheses can backfire. Sometimes we’ll exude confidence in a situation that goes completely out of our control. Other times we’ll feel completely helpless in circumstances that are completely in our control. How often have you pressed an elevator button with increased frequency or pressure even though you know full well it’s not going to have any effect?
Social context also plays a big role in the actions we take as well – particularly with how truthful we are. For example, when we think that people aren’t listening to us, we tend to be dismissive about potential pitfalls when promoting our ideas. On the flip side, when we feel like we have a lot of influence we’ll be more hesitant and cautious – perhaps out of fear of losing that influence.
There have also been a number of studies about our willingness to inflict pain on others, which also will change based on circumstances. How many painful electric shocks would you give a stranger in order to hold on to a stack of free money? Experiments suggest that, when you have the money in your hand, you’ll be more willing to inflict pain than you were under hypothetical circumstances.
When action and outcome feel closer together and your agency is voluntary, we’re dealing with a neurological situation called intentional binding. Interestingly enough, all the signals you find under those circumstances change when a third party is ordering you to do something. Those orders cause the brain signals to shift, as if the mind is offloading responsibility.
At all times your sense of control is a living hypothesis. It updates with each experiment you run, each belief you hold, and each authority you obey. At any moment you can find yourself in a tangled situation, where the data is misinterpreted and the actions come out wrong. But by noticing how our models tilt perception of cause and effect, we can recalibrate – seeing more clearly where our actions truly matter, and where they don’t.
Let’s move away from our internal inspections and look outward to others. Every day we’re attempting to decode other people’s intentions and emotions, even when their signals are ambiguous. A colleague’s silence, a friend’s sudden exit – are they annoyed? Distracted? What’s going on?
To cope, the brain runs what’s called a “Galileo maneuver”: it first points its instruments inward, using firsthand knowledge of how feelings drive our own movements, then projects those models outward to interpret others. Motion can tell us a lot: buoyant steps often signal joy, heavy movements sadness, sharp bursts signal anger.
But the Galileo maneuver has its flaws since we’re going to be most accurate with people who move like us. A youngster might look at the heavier, slower movements of an older person and mistake these signals as sadness, for example. The same mistakes can occur when two people from different cultures or neurotypes meet each other for the first time.
When a person’s gestures and tempos are different from our own, it can lead to what’s known as a “double empathy problem.” In one revealing study, researchers used simple animated drawings and asked participants to describe what was happening. Neurotypical researchers observed that autistic participants struggled to explain the movements of the figures. But when the roles were reversed – when autistic participants created their own animated stories – the researchers found themselves unable to follow the narrative.
It’s perhaps unsurprising, then, that people with richer and more varied social experiences develop a deeper well of knowledge for recognizing different traits and expressions. Just like machine-learning algorithms, our mental models absorb whatever data we feed them. Accuracy improves when we broaden those inputs—by seeking diverse experiences, listening to different “movement vocabularies,” and continuously updating our mental maps with better data. The more varied the worlds we explore, the more gracefully we navigate the orbits of other minds.
If we think of our brains as scientists running constant experiments, it makes sense that confidence grows with successful predictions. Research shows that early experiences of success and failure shape our internal models in lasting ways. Wins give us a boost and keep us engaged, while losses can cause us to shrink away. Yet persistence pays off: people who keep trying after setbacks often catch up with the so-called early winners.
Building a self-model that emphasizes perseverance can require some effort. As we’ve touched on, people routinely misjudge their own behavior. Our self-monitoring system, known to scientists as metacognition, or thinking about thinking, has to deal with a lot of messy signals. It can be a challenge to know what to trust and what to ignore.
What do we believe about ourselves, and why do we believe it? And when we receive new information that may challenge those beliefs, when is the right time to be stubborn and when is it appropriate to reconsider things?
The process is rational in spirit – yesterday’s reliability forecasts tomorrow’s. But it can skew. Early failures seed underconfidence, discouraging new attempts. But what is needed, as with any system, is new data. Without new data, the pessimistic loop closes in on itself. In depression, that loop hardens, draining effort even on good days.
It helps to understand that expectations don’t just steer choices; they shape how experience feels. It’s similar to the glass half-empty or half-full scenario. Two people can look at an identical visual input and see different things based on their expectations. And while too little confidence can keep us stuck, too much brings its own distortions – especially in how we handle new evidence. Overconfidence pushes us to favor confirmation over contradiction, reinforcing what we already believe instead of helping us learn.
Like everything else, self-belief is a living model – constantly shaped by success and failure, tuned by expectation, and sometimes trapped by its own predictions. The remedy is to keep gathering fresh experiences, notice when confidence distorts perception, and allow enough trials for the model to update itself. The good news is that, as we’ll see in the next section, we’re built to enjoy learning.
Dopamine has a certain reputation these days. Most of us have heard about this neurotransmitter and think of it as the brain’s pleasure chemical. But that’s only a small part of the story. The joy we feel when we get a hit of dopamine isn’t just related to the pleasures of hedonistic pursuits. It also comes down to our brains being wired for curiosity, and the joy of discovering and learning. One of our defining characteristics is our innate drive to chase understanding even when it has no obvious payoff. Why else build particle accelerators or spend evenings lost in philosophy?
What gets the pleasure center of the brain really going is surprise. Sure, we all like getting a reward, but the real turn on is found in the gap between expectation and reality – when there’s a prediction error and the reward is unexpected. If we can accurately predict when the prize will arrive time and time again, it becomes boring – we lose interest.
Scientists call it the “hedonic treadmill.” Gains feel good at first, then expectations catch up and the high fades. But this applies as much to money and food as is does to information. Knowledge itself has become delicious. It’s why learning can feel more rewarding than winning. In a study of people gambling on car races, happiness rose not with the biggest payouts but with the most informative updates. If people learned something during the race – even if they lost their best, they would come away happier than they were before.
Surprise itself is pleasurable. Eureka moments feel good because the brain is wired to reward updates. And that’s the root of wonder. From art to science to religion, wonder is the spark. The great thing is, unlike money or food, this currency doesn’t deplete – curiosity replenishes itself, each answer birthing new questions.
Earlier on, we mentioned how the models in our brain were similar to the models used by machine learning. The better the data sets, the better the output. And as artificial intelligence continues to improve, it may lead you to wonder, if machines can stitch together fluent sentences by tracking patterns, are our brains doing something similar? Are we also just prediction machines at heart?
It’s true that we put a lot of emphasis on language. When Google engineer Blake Lemoine had his first breakthrough with LaMDA, a large language model, he believed it had become sentient, simply because it was talking to him in such a clear and convincing way. But even though others were quick to argue that these systems are just elaborate pattern-matchers, their output can feel eerily human.
It was enough to unsettle the adherents to analytic philosophers like Noam Chomsky and Rene Descartes, who believe that language was uniquely human, irreducible to mechanism. If both machines and brains rely on predictive tricks, maybe originality isn’t about escaping patterns – maybe it emerges from them. Maybe creativity and originality is a programmable process. Generate endless variants, most ordinary, some surprising, then sift and select. No magical genius necessary.
Unlike machines, however, our brains filter patterns through constellations of memories, beliefs, and cultures. Ideas mutate as they pass between people. The psychologist Donald Campbell argued that universities and organizations generate more creative breakthroughs when they stop building silos and start overlapping their expertise.
This mixing and mutating of ideas and perspectives is something that humans are uniquely capable of. Picasso’s Cubism, for instance, likely grew from mixing his fascination with African masks with his background in European painting. Creativity isn’t plucking ideas from thin air – it’s the recombination of diverse influences.
Machines may replicate patterns, but human originality comes from embedding them in living, social minds – minds that are always updating, always reweaving.
Throughout the lesson we’ve followed the “scientist in your skull,” watching how it steadies perception with theories. But models wobble. When the world grows volatile, the same machinery that guides us can lure us toward odd beliefs – including conspiracies. The issue here isn’t strange minds, but uncertain times. And when uncertain times lead to paradigm shifts, our models can be thrown seriously off balance.
When our everyday world shifts from “normal” to being full of anomalies, our confidence cracks. Our expectations and predictions feel off. A paradigm shift forces the question of trust that our brain faces daily: do we trust old experience or new data? When should we remain stubborn in our beliefs, treating anomalies as chaotic noise and coincidence, and when should we be flexible and update our beliefs accordingly?
Meta-learning helps solve this dilemma. Meta-learning is essentially learning how much to learn. It’s about paying attention to stability versus volatility. One bad espresso at a café that’s always reliable? Probably a fluke. But if staff turnover is constant, that same sip signals real change. By focusing on stability versus volatility, we can tune how quickly we need to update.
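The espresso example can be sketched as a simple learning rule. The code below is a minimal illustration of my own (not the book’s, and not a real neuroscience model): a belief is nudged toward each new observation, and the size of the nudge – the learning rate – is scaled up when recent prediction errors suggest the world is volatile, and kept small when things have been stable.

```python
def update_belief(belief, observation, learning_rate):
    """Classic delta-rule update: move the belief toward the observation."""
    error = observation - belief
    return belief + learning_rate * error, error

def meta_learning_rate(recent_errors, base=0.1, sensitivity=0.5):
    """Learn how much to learn: bigger recent surprises -> faster updating."""
    if not recent_errors:
        return base
    volatility = sum(abs(e) for e in recent_errors) / len(recent_errors)
    return min(1.0, base + sensitivity * volatility)

# A reliable cafe: quality hovers near 8, one bad espresso (a 3) is a fluke.
belief, errors = 8.0, []
for quality in [8, 8, 8, 3, 8, 8]:
    rate = meta_learning_rate(errors[-3:])   # judge volatility from recent errors
    belief, err = update_belief(belief, quality, rate)
    errors.append(err)
# belief ends back near 8: the single anomaly barely dents it
```

In a stable run like this, the learning rate stays low and the lone bad espresso is shrugged off as noise; feed the same rule a stream of erratic observations and the rate climbs, so the belief tracks the new regime instead. The constants `base` and `sensitivity` are arbitrary knobs chosen for the illustration.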
Real-world upheavals – pandemics, social unrest – push minds toward high learning rates. In those states, even flimsy evidence can seep in, which helps explain why conspiracies spike in unstable eras.
Chemistry plays a role, too. Noradrenaline, released from the locus coeruleus, signals volatility and can make us latch on to flimsy evidence. Interestingly, beta-blockers like propranolol can dampen this effect, making the world feel steadier and decisions more level-headed. Stimulants like Ritalin, on the other hand, boost noradrenaline, causing us to switch our beliefs faster.
So what’s best: rigid theories or constant flux? Neither. It depends on the circumstances. Filters are not flaws – they’re how we see – but they must be revisable. The art is holding theories firmly in calm weather and loosening them when the winds shift.
It’s a balance. To grasp the mind fully, we need both microscopes and wide-angle views, and to be open to the ideas of artists as well as scientists. The good news is that the scientist in your head is curious, adaptable, and capable of shifting when anomalies appear. You can also think of your mind as an author, because it works in drafts. Today’s best draft may feel complete, but tomorrow’s data can rewrite it. And that’s the lasting message: your reality is a collaboration between world and mind, and like everything else, you’re a work in progress.
The main takeaway of this lesson on A Trick of the Mind by Daniel Yon is that reality is not passively absorbed but actively constructed by our brains, which function like scientists – building, testing, and revising theories to make sense of the material world, the minds of others, and the world of ideas. These predictive models allow us to perceive, communicate, and create, but they also leave us vulnerable to illusions, biases, misplaced confidence, and even conspiracy thinking when the world grows volatile. From dopamine-driven curiosity to metacognitive self-reflection, from the sparks of originality to the shifts of entire paradigms, our thoughts, beliefs, and perceptions are shaped as much by past experience and expectation as by the raw data of the present. Ultimately, our minds are works in progress, forever revising their models of reality, open to surprise, and always capable of change.