A Field Guide to Lies by Daniel J. Levitin: Critical Thinking with Statistics and the Scientific Method
What's it about?
A Field Guide to Lies (2016) is a survival manual for our information-saturated world. With lessons on how to spot misleading statistics, arguments, and reports, its guidance is organized into two key areas: statistical information and faulty arguments. You’ll learn to recognize when numbers are being manipulated, and to avoid falling for logical fallacies in an age where misinformation spreads rapidly.
A Field Guide to Lies
Every day, you’re bombarded with information designed to persuade, inform, or influence you. A news headline claims a shocking statistic. A social media post shares a “groundbreaking” study. In our hyperconnected world, distinguishing truth from manipulation has become both more crucial and more challenging than ever.
The problem isn’t just that bad information exists – it’s that our brains are wired to accept convincing narratives and impressive-looking numbers without much scrutiny. We live in an age where anyone can dress up opinion as fact, where correlation gets mistaken for causation, and where a single cherry-picked study can launch a thousand misleading headlines.
In this lesson, we’ll explore a few of the tools Daniel Levitin proposes as a defense system against misinformation. You’ll learn how to spot misleading statistics, uncover the biases behind studies, and recognize false experts. You’ll also get an understanding of how science really works, and prime yourself against your own logical fallacies.
Let’s get to it.
You’re scrolling through your feed one afternoon when a bold statistic catches your eye: “Our top salesperson made 1,000 sales in a single day!” Your brain immediately thinks, Wow, that’s impressive. But here’s the thing – numbers have a sneaky way of making us believe they’re telling the absolute truth, simply because they look so official and precise.
The reality is far more complex. Behind every statistic sits a human being who gathered, interpreted, and presented that data. And humans, as we know, make mistakes – and lie. This is why developing a habit of doing quick plausibility checks can save you from falling into statistical traps.
Let’s return to that sales claim. If closing a deal takes at least a minute, then a salesperson can close at most 60 deals in an hour. Which means – and this is being generous – an eight-hour workday would max out at 480 sales, assuming the person never took a break and closed every single call. Suddenly, that 1,000-sale claim doesn’t seem so believable, does it?
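If you like, you can make this back-of-the-envelope habit concrete in a few lines of Python. Here’s a minimal sketch of that plausibility check, using the same generous one-minute-per-sale assumption:

```python
# Plausibility check: can one person really close 1,000 sales in a day?
minutes_per_sale = 1      # generous assumption: a deal takes at least one minute
hours_worked = 8          # a full workday, no breaks
claimed_sales = 1_000

max_possible = hours_worked * 60 // minutes_per_sale    # hard upper bound
print(f"Upper bound: {max_possible} sales per day")     # 480
print(f"Claim plausible? {claimed_sales <= max_possible}")  # False
```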
Now, even when statistics aren’t outright wrong, they can still mislead us in subtle ways. Take averages – those often helpful numbers that compress mountains of information into bite-sized pieces. The most commonly used form of average is called the mean. It’s calculated by summing up all the individual values in a sample and then dividing the sum by the total number of values.
Consider this statistic: “Human beings have one testicle on average.” This is technically true, if we’re calculating the mean of both men and women jointly. But is it telling us anything useful? The problem isn’t with the math – it’s the way the math was applied.
Here’s where averages become truly dangerous: they strip away information about extremes. Death Valley, California, boasts a mean temperature of a pleasant 77 degrees Fahrenheit. Sounds like perfect weather for a vacation, right? But venture there on the wrong day, and you could find yourself battling 134-degree heat or shivering in 15-degree cold. The average completely masks the range.
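Here’s a quick sketch of how that works. The temperature readings are made up for illustration, chosen so the mean lands right at Death Valley’s pleasant-sounding 77 degrees:

```python
# The mean collapses a whole distribution into one number - and loses the extremes.
temps_f = [15, 50, 70, 90, 103, 134]  # illustrative readings, not real weather data

mean_temp = sum(temps_f) / len(temps_f)
print(f"Mean: {mean_temp:.1f} F")                      # 77.0 - looks like vacation weather
print(f"Range: {min(temps_f)} F to {max(temps_f)} F")  # 15 F to 134 F - the real story
```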
This same principle applies everywhere – from business reports that use “average customer satisfaction” to hide the fact that half your customers are furious, to salary surveys that obscure massive pay gaps.
So the next time you encounter a statistic, whether it’s in a news article, social media post, or business presentation, pause for a moment. Ask yourself, Does this number actually make sense? What story might it be hiding?
Studies are the go-to weapon for proving a point these days – newspaper articles cite them to support headlines, pharmaceutical companies use them to validate new drugs, and political campaigns wave poll numbers to claim momentum.
But behind all these statistics lies a fundamental challenge: to obtain reliable data, real people need to collect the information. Since you can’t study every single rock in the Atlantic or interview every single person in North America, you need to decide on a sample. That’s when the problem of sample bias creeps in.
Here’s how it works in practice. Say you want to survey San Franciscans about climate change attitudes. You head to Union Square and interview people across different ages, ethnicities, and dress styles, thinking you’ve got a representative sample. Wrong. You’ve already excluded massive chunks of the population: people sick at home, mothers with small children who can’t easily get downtown, night workers sleeping during the day.
Fine, you think – door-to-door surveys will solve this. But if you knock during daytime, you miss everyone working in town. Switch to nighttime, and you exclude party-goers, church attendees, and night-shift workers already at their jobs. Every approach systematically leaves someone out.
Even if you somehow managed to reach a perfect cross-section of people, two more insidious biases are waiting to undermine your results.
First comes participation bias. Not everyone you ask will agree to participate, and their reasons for declining can skew your data in predictable ways. A study about sexual attitudes will likely discourage more prudish people from participating. Political surveys might not attract people with neutral views who find such topics boring or divisive. The volunteers aren’t random – they’re self-selecting based on who cares enough to engage with your particular topic.
Then there’s reporting bias – the gap between what people actually think and what they’re willing to tell a stranger with a clipboard. Some participants will exaggerate their income to appear more successful. Others will mask their true earnings to maintain discretion. People lie, embellish, forget details, or simply tell you what they think you want to hear.
The uncomfortable reality? Almost every sample includes some form of bias. There’s no perfect way around it. The question isn’t whether bias exists, but what kind of bias you’re dealing with.
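To see how much damage one excluded group can do, here’s a toy simulation – every number in it is invented purely for illustration:

```python
import random

random.seed(42)

# Toy population: 30% night workers who (in this invented world) support
# a policy far less often than everyone else.
population = (
    [{"night_worker": True,  "supports": random.random() < 0.30} for _ in range(3_000)]
    + [{"night_worker": False, "supports": random.random() < 0.60} for _ in range(7_000)]
)

true_rate = sum(p["supports"] for p in population) / len(population)

# A daytime street survey never reaches night workers - they're asleep.
reachable = [p for p in population if not p["night_worker"]]
sample = random.sample(reachable, 500)
survey_rate = sum(p["supports"] for p in sample) / len(sample)

print(f"True support:    {true_rate:.1%}")    # around 51%
print(f"Survey estimate: {survey_rate:.1%}")  # around 60% - skewed by who we could reach
```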
As someone encountering statistics in news articles, work presentations, or social media feeds, your job isn’t to dismiss all survey data as worthless. Instead, become a bias detective. Every time you see survey results, ask yourself, Who got left out of this sample? Who chose to participate and why? What might respondents have been reluctant to admit honestly?
Once you start asking these questions, those confident statistical claims won’t quite have the same sway over you.
In the same way we approach statistics with reasonable caution, we also need to think critically when it comes to statements made in words. Here’s why: as a storytelling species, we’re wired to be easily swayed by a convincing narrative. So we have to be extra vigilant.
The first thing you need to do when confronted with a claim from some kind of authority is ask where their authority actually comes from. Are they presenting the data they’ve used to make their claim and showing their reasoning? Or are they just stating their opinion? If it’s just an opinion, how trustworthy are they?
Start with the basics. They might be a recognized expert in their field, their work might appear in peer-reviewed journals, or they may have won significant awards or prizes. None of this guarantees trustworthiness, but it’s a solid starting point.
When it comes to information online, there are other things to consider. If you’re looking at information on a website, what kind of domain name is it? Sites that end in .edu, .gov, and .org tend to offer more neutral reports from educational or nonprofit studies, compared to commercial websites with obvious agendas.
And of course you have to look out for counterknowledge – a term coined by UK journalist Damian Thompson, which is what we nowadays call “fake news.” Fake news doesn’t just happen in politics. You also get it in science, pseudohistory, celebrity gossip, and current affairs.
Here’s something crucial to remember: when an event is complex, you simply can’t explain everything, because not everything is reported or observed. Take President John F. Kennedy’s assassination – the main photographic evidence is a low-resolution film shot at just 18.3 frames per second. Conspiracy theorists love these gaps, but incomplete evidence doesn’t automatically indicate a cover-up.
The other thing to keep in mind is that many established theories rely on thousands of pieces of evidence. The existence of just a few holes in the theory isn’t enough to discredit it. Think climate science or evolution – these aren’t house-of-cards theories that collapse at the first sign of inconsistency. They’re robust frameworks built on mountains of data.
Ultimately, when it comes to claims made in words, it’s up to you to use your judgment. Do you trust the authority enough? Does the theory make sense? Are they asking you to believe something extraordinary based on flimsy evidence? Extraordinary claims require extraordinary evidence – not just a compelling story that fills in convenient gaps.
There’s no doubt that science has shaped how we think and what we do as a society. But do we know how it actually works? Most people have some pretty big misconceptions about the scientific process – and these misunderstandings matter when you’re trying to evaluate the flood of “breakthrough studies” hitting your news feed every day.
The first myth you need to abandon is that science is neat and tidy, with scientists all agreeing on what we know. In reality, science is full of controversy and debates about what we actually understand. Scientists are continuously doubting, questioning, and challenging each other’s work. This isn’t a bug in the system – it’s a feature. The messiness is what makes science robust.
The second myth is that scientific progress happens suddenly in big, dramatic leaps – like a lightbulb moment that changes everything overnight. Really, science is built bit by bit, by combining and cross-checking thousands of individual studies across multiple laboratories, until many results converge into a clearer picture.
That’s why the meta-analysis is so important when you encounter claims about revolutionary findings. A meta-analysis does exactly that kind of cross-checking by combining results from multiple studies to see if they point in the same direction. You should look out for them whenever someone presents a new “game-changing” discovery – they’re your best bet for separating genuine breakthroughs from overhyped single studies.
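If you’re curious what “combining results” looks like in practice, here’s a minimal sketch of a fixed-effect meta-analysis, which pools studies by weighting each one by how precise it is. The study numbers are invented:

```python
import math

# (effect estimate, standard error) from several hypothetical studies
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10), (0.05, 0.25)]

# Fixed-effect meta-analysis: weight each study by 1 / SE^2,
# so precise studies count more than noisy ones.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
```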
Now for the nuts and bolts of how scientists actually think. There are two types of reasoning they use: deduction and induction.
Deduction is when you start with a general observation and use logic to arrive at a specific conclusion. For example, it’s true that you’re a human. It’s also true that all humans are mortal. From these two statements we can deduce that you yourself are mortal. The conclusion is guaranteed if the premises are true.
Induction works differently. Here, the evidence suggests that a conclusion is true, but doesn’t guarantee it. For example, if every bird we know of has a beak, we can infer that a new bird discovered tomorrow would also have a beak. Probably true, but not absolutely certain.
When used correctly, deduction and induction allow scientists to suggest new hypotheses and arguments, which they can then test out and hopefully gain new knowledge about the world. But here’s the catch: it’s just as easy to be fooled by faulty logic when these tools are misused. In the last section, we’ll find out how.
Your brain is extremely good at finding patterns and order in the world around you. It’s one of humanity’s greatest evolutionary advantages – the ability to spot trends, connections, and meaning in chaos. But here’s the problem: your brain likes patterns so much that it often undermines your logical reasoning. These mental mistakes are what we call logical fallacies.
Picture this scenario: you get two phone calls in the same week from friends you just happened to be thinking about. Your brain immediately jumps to explanations – maybe it’s extrasensory perception, or some invisible connection between you and your friends. The coincidence feels meaningful, even mystical.
But now step back and consider all the phone calls you got that week which weren’t from friends you were thinking about. Add to this all the times you were thinking about someone and they didn’t call. Or how about the countless times you weren’t thinking about someone and they didn’t call? Suddenly, your two “psychic” calls look like a much smaller, far less meaningful number.
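You can even put rough numbers on the coincidence. Here’s a quick Monte Carlo sketch – both rates in it are pure assumptions, there only to show how often “psychic” weeks happen by chance alone:

```python
import random

random.seed(0)

CALLS_PER_WEEK = 20          # assumption: calls you receive in a typical week
P_THINKING_OF_CALLER = 0.05  # assumption: chance you'd recently thought of a given caller

def psychic_calls_in_week() -> int:
    """Count calls that happen to come from someone you were just thinking about."""
    return sum(random.random() < P_THINKING_OF_CALLER for _ in range(CALLS_PER_WEEK))

weeks = 10_000
spooky_weeks = sum(psychic_calls_in_week() >= 2 for _ in range(weeks))
print(f"Weeks with 2+ 'psychic' calls by chance alone: {spooky_weeks / weeks:.1%}")
```

With these made-up rates, roughly a quarter of all weeks contain two or more “psychic” calls through nothing but luck.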
Our pattern-seeking tendency makes us vulnerable to another logical trap: framing – the way information is presented to you. Many people exploit this weakness by purposefully misframing information to support their agenda.
Here’s how this plays out in real life. Imagine a home-security salesperson tells you that 90 percent of home invasions are solved using video footage provided by the homeowner. Sounds impressive, right?
But let’s do a plausibility check. A quick internet search reveals that the FBI reports only about 30 percent of home robberies are solved at all. So what’s really happening with that salesperson’s statistic? They’re saying that of the home invasion cases that do get solved, 90 percent involve home-recorded footage. That’s 90 percent of 30 percent – about 27 percent of all home robberies.
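The reframing comes down to a single multiplication; here it is as a tiny sketch, using the rough 30 percent clearance figure from above:

```python
# Reframing the salesperson's statistic as a share of ALL home robberies.
p_solved = 0.30                 # rough share of home robberies that get solved at all
p_footage_given_solved = 0.90   # the claim: footage features in 90% of SOLVED cases

p_solved_with_footage = p_solved * p_footage_given_solved
print(f"Robberies solved with homeowner footage: {p_solved_with_footage:.0%}")  # 27%
```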
This is a much more accurate way of framing the information, but it’s far less impressive. And of course, it’s much less effective at getting you to buy the salesperson’s security system. But they already knew that when they chose their wording.
This is yet another reason we should all be dedicating some of our time to thinking critically, analyzing information, and drawing reasoned conclusions. In an era where misinformation spreads faster than facts, it’s the only way to push back against the mounting disinformation that floods our daily lives. Your pattern-loving brain is powerful – but it needs your logical mind as a partner, not a passenger.
The main takeaway from this lesson on A Field Guide to Lies by Daniel J. Levitin is that numbers aren’t as trustworthy as they appear. Statistics can mislead when they fail plausibility checks, averages hide crucial data ranges, and every survey suffers from sample, participation, and reporting biases that skew results.
You can spot fake experts by checking credentials and domain names, while understanding that real science is messy – built through thousands of cross-checked studies rather than sudden breakthroughs. Meta-analyses are your best tool for evaluating new claims.
Your brain is wired to find order, even where none exists. That instinct can make you vulnerable – mistaking coincidences for causes, or accepting skewed statistics and conspiracy theories that neatly “fill the gaps.”
That’s why critical thinking tools are so important. With plausibility checks, bias detection, and other techniques, you can cut through the noise and navigate today’s information-saturated world with clarity and healthy skepticism.