Brilliant Mistakes: Finding Success on the Far Side of Failure by Paul J. H. Schoemaker

What's it about?

Brilliant Mistakes (2011) contends that trying to eliminate every misstep can backfire, while well-chosen errors can actually accelerate learning and improve performance. It explains why mistakes can yield benefits, when to avoid them, and how to design small, safe tests that expose hidden assumptions so you can make smarter decisions. It also lays out practical steps you can apply to learn faster from deliberate missteps.

On a frozen New Year’s morning, a road-weary band trudged into a London studio. Big labels had already passed on their music; their gear was beat-up; executives were unimpressed. Yet a young manager heard something – wit, possibility – and signed them anyway. Many at the time called it a mistake. History calls it the start of the Beatles.

Sometimes, what might seem like the “wrong” move can turn out to be right. We tend to judge by outcomes, but luck and timing blur the picture. A better lens is the quality of thinking at the moment of choice. When errors are small, safe, and designed to reveal hidden assumptions, they accelerate learning – and occasionally unlock breakthroughs no plan could script.

In this lesson, you’ll find out how to spot useful errors, build low-cost experiments that probe your riskiest assumptions, run a portfolio of reversible bets, and review missteps with a forensic eye. You’ll learn ways to prime your attention so you catch weak cues that others miss – and turn them into insight. Ready to make smarter mistakes? Then let’s begin.
In the 1920s, researchers at the Western Electric Hawthorne Works near Chicago ran a simple experiment. They increased the lighting to see if brighter conditions would boost worker productivity. Output did rise – but then something unexpected happened: it rose again when the lights were dimmed. The real driver, it turned out, wasn’t the lighting at all – it was the act of being observed. The attention made workers feel seen and valued, which in turn raised their effort. That insight helped spark a new understanding of workplace motivation. What began as a flawed theory led somewhere useful.

That’s the kind of error worth paying attention to – not slip-ups or blunders, but decisions that seem wrong at first and turn out to hold value. To get there, we have to let go of the idea that mistakes are defined by their outcomes. Many decisions that look smart in hindsight – like George Martin signing the Beatles, or Bill Gates dropping out of Harvard – only look that way because things happened to work out.

A better approach is to judge decisions by the quality of thinking that went into them, given the information available at the time. That’s harder, but more honest. Outcomes are shaped by many things you can’t control: how long you wait before judging, what randomness enters the mix, what actions follow the decision, and what alternatives were never chosen. And since there’s no neutral umpire in life, what counts as a mistake often depends on who’s keeping score – and when they decide to call it.
In 1961, meteorologist Edward Lorenz reran a weather simulation by hand-entering numbers from a printout, rounding them slightly to save time. To his surprise, the results diverged wildly. The cause? A minuscule difference in decimal values – enough to shift the system entirely. What began as a mistake revealed something profound: tiny changes in complex systems can trigger massive ripples. That insight led to the butterfly effect and laid the groundwork for chaos theory.
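To get a feel for what Lorenz stumbled onto, here is a minimal Python sketch of the same phenomenon. It uses the logistic map as a stand-in for his weather model, and the starting values are made up to mimic his rounding error; none of this comes from the book.

```python
# Sensitivity to initial conditions, in the spirit of Lorenz's discovery.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is a stand-in for his
# weather model; r = 3.9 puts it in the chaotic regime.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ only from the fourth decimal place onward,
# roughly the kind of rounding Lorenz introduced when re-entering numbers.
a = logistic_trajectory(0.506127)
b = logistic_trajectory(0.506000)

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
# The first few steps track each other closely; within a couple of dozen
# steps the two runs bear no resemblance to each other.
```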

Lorenz’s rounding error is a prime example of a brilliant mistake – a misstep that yields benefits far beyond its cost by surfacing insights no one could have predicted. These kinds of mistakes require two things. First, something has to go wrong in an unexpected way. And second, that disruption has to open the door to deep, valuable learning.

Inventors often rely on this dynamic. Thomas Edison tested hundreds of filaments before finding one that worked. James Dyson went through more than 5,000 failed prototypes to build his bagless vacuum cleaner. Their progress emerged from testing, failing, and refining over and over again rather than flawless plans.

That same principle applies to questioning assumptions. In a field test, psychologist Ian Walker used an ultrasonic sensor and found that drivers passed closer when he wore a helmet and gave more space when he wore a wig. In drug development, teams were asked to design early trials to try to prove a compound would fail, focusing on toxicity and interactions; that switch brought hidden risks into view sooner.

These examples show that to create conditions for brilliant mistakes, we need small, low-cost tests that can fail safely, surface insights worth far more than they cost, and reduce exposure to costly errors.
Most people don’t like to dwell on their mistakes – especially when they’re painful or public. But if you want to learn from them, you need a forensic mindset: one that puts ego aside and probes what really went wrong. That’s harder than it sounds. Emotions like embarrassment, fear, or shame often trigger defensive reactions – denial, blame, or rationalization – that block the path to insight.

John Stallings, a mathematician, published a high-profile proof of a major theorem – then publicly retracted it. He owned his error, describing how excitement and wanting to be right blinded him to flaws in his thinking. This kind of intellectual humility is rare, but essential for deeper learning.

The criminal justice system provides a more troubling case. Kevin Green was wrongly convicted of attacking his pregnant wife. Only years later did DNA evidence reveal the real culprit. Faulty eyewitness memory, unreliable forensics, and biased testimony all played a role – along with systemic pressures that encouraged tunnel vision. The story is tragic, but it also led to reforms. Work by organizations like the Innocence Project has helped expose deeper patterns of error and improve how evidence is handled.

Emotion isn’t the only barrier to learning. Cognitive traps also get in the way. Feedback may be distorted, incomplete, or missing altogether. People rely on outdated rules of thumb, seek confirming evidence, and avoid ambiguity. Even group decisions – supposedly a safeguard – can suffer from groupthink or political pressure.

To make better decisions, break the process down. First, frame the problem – question assumptions, define your goals, and explore all options. Then gather intelligence, being mindful of bias. Make a habit of looking for disconfirming evidence. Next, draw conclusions without rushing to certainty or deferring to consensus. Finally, track outcomes carefully and stay open to feedback. Balance performance with learning by allowing small, safe failures that deliver real feedback.

Every mismatch between what you expected and what actually happened is a chance to learn. That opportunity only becomes real when you stop protecting your ego and start asking what the mistake might be trying to teach you.
While most people try to avoid errors, the smartest entrepreneurs, scientists, and decision-makers actually treat certain kinds of mistakes as tools for faster learning. As Oscar Wilde once said, “Experience is the name we give to our mistakes.” The real trick is making the kind of mistakes that teach you something valuable – quickly, cheaply, and safely.

That’s how venture capitalists think. They expect startups to stumble early, but look for founders who can adapt fast. One investor noticed a clear pattern: the most successful ventures changed direction midstream. The ones that failed tended to stick too rigidly to their original plans. Speeding up the learning loop, even through failure, is more important than getting it right on the first try.

The same idea applies to everyday life. Maria Dahvana Headley, frustrated by dating in New York, said yes to every date for a year – except, of course, from anyone obviously dangerous. She made plenty of “mistakes.” But each one taught her something about what really mattered. In the process, she learned faster – and ended up finding a relationship she might have missed otherwise.

We resist this approach for four reasons: overconfidence, fear of failure, confirmation bias, and unreliable feedback. But research shows that testing ideas you expect to be wrong can reveal hidden truths. In one case, students only discovered a number pattern when they deliberately tested options that violated their assumptions.

The key is to make mistakes with discipline. Define what you believe, test the riskiest assumptions in low-cost ways, and extract real feedback. One consulting firm used a six-step process to test core assumptions: identify, score for importance and certainty, rank, design a safe test, execute, and learn. When they applied this method to an RFP, or request for proposal, it not only revealed unexpected insights but also generated over $1 million in follow-up work.
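As a rough illustration of the first three steps in that process, here is a small Python sketch. The assumptions, the scores, and the importance-times-uncertainty priority rule are invented for this example; they are not taken from the firm's actual method.

```python
# A sketch of the "identify, score, rank" front end of the six-step process.
# Steps 4-6 (design a safe test, execute, learn) would follow for whichever
# assumption tops the ranking. All data below is illustrative.

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    importance: float   # 1-10: how much the decision hinges on it
    certainty: float    # 1-10: how confident we currently are that it's true

    @property
    def test_priority(self) -> float:
        # Important but uncertain assumptions are the riskiest to leave untested.
        return self.importance * (10 - self.certainty)

assumptions = [
    Assumption("Clients choose us mainly on price", importance=9, certainty=4),
    Assumption("Longer proposals win more work", importance=6, certainty=7),
    Assumption("Decision-makers read only the executive summary", importance=8, certainty=3),
]

for a in sorted(assumptions, key=lambda x: x.test_priority, reverse=True):
    print(f"priority {a.test_priority:5.1f}  {a.statement}")
```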

Done right, deliberate mistakes don’t just teach you faster – they open doors you didn’t know were there.
What if doing the wrong thing – on purpose – was the smartest move you could make?

That’s what happened when a young employee at a major pet-food company urged management to track sales in nontraditional channels like pet stores and vet clinics – even though supermarket data showed steady performance. The decision looked irrational and costly. But it revealed a hidden decline, allowing the company to adapt before it was too late.

This exemplifies deliberate mistakes – actions that defy conventional thinking to test hidden assumptions. They may look foolish, but in fast-moving or complex environments, they often unlock insights that couldn’t be reached any other way.

There are two main approaches here: either challenge conventional wisdom based on a hunch, or deliberately act against your own assumptions to test your thinking. That’s what ad legend David Ogilvy did – running campaigns he expected to fail to see if his thinking was still sharp. Some flopped. One, the Hathaway man with the eye patch, became a classic.

Here’s another powerful example from the Bell System, the former US telephone monopoly. Faced with over $450 million in annual losses from bad debt, Bell Labs scientists proposed a radical test: stop requiring deposits from new customers with poor credit. Management reluctantly agreed, and many supposedly risky customers turned out to be reliable. The experiment revealed flaws in their existing credit model and led to improved prediction systems – adding $137 million in annual profits for a decade.

Deliberate mistakes differ from calculated experiments: in strict terms, they’re tests you’d normally reject on expected value. You don’t run them expecting success – you run them to expose buried assumptions your usual decision process can’t see. They work best in complex, changing environments, where small, low-cost tests can surface those assumptions without risking too much.

Organizations often resist this mindset unless leaders actively support learning. So grant permission to question dogma, revisit old assumptions, and back ideas that don’t pass a standard cost-benefit test. As IBM founder Thomas J. Watson Sr. said, “Go ahead and make mistakes, make all you can. Because that’s where you will find success – on the far side of failure.”
You’ve learned that the smartest way to protect yourself from failure is to make more mistakes. And the more mistakes, the better.

That’s the idea behind a portfolio approach to failure. Instead of avoiding error at all costs, smart individuals and organizations make multiple deliberate mistakes and treat them as low-cost, reversible, diversified “bets.” Just as investors reduce risk by spreading money across different assets, mistake-makers hedge against overreliance on conventional wisdom by testing different assumptions at once.

One failure might be painful. But many small mistakes, when designed well, can reveal insights the crowd misses. This broadens your learning and reduces the risk of being blindsided. And when a few bets succeed, they often do so disproportionately.

This doesn’t mean being reckless. As an African proverb warns, “Never test the depth of a river with both feet.” Portfolio thinking means keeping one foot grounded while the other explores new territory. It also means choosing tests that behave differently from each other – that are negatively correlated, in financial terms – so if one fails, another might succeed.
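To make the financial analogy concrete, here is a toy Python sketch with invented payoffs: two small tests whose results tend to move in opposite directions, so their combined outcome swings far less than either test on its own.

```python
# A toy illustration of why negatively correlated bets hedge each other.
# The payoff numbers are invented; only the spread of outcomes matters.

import statistics

# Four scenarios in which test A and test B tend to move in opposite directions.
scenarios = [
    {"test_a": +4, "test_b": -1},
    {"test_a": -2, "test_b": +3},
    {"test_a": +5, "test_b": -2},
    {"test_a": -3, "test_b": +4},
]

a_only = [s["test_a"] for s in scenarios]
combined = [s["test_a"] + s["test_b"] for s in scenarios]

print("test A alone:  mean", statistics.mean(a_only),
      " stdev", round(statistics.stdev(a_only), 2))
print("both together: mean", statistics.mean(combined),
      " stdev", round(statistics.stdev(combined), 2))
# Running both tests produces a far steadier result than relying on either
# alone: one foot stays on the bank while the other tests the water.
```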

People struggle to think this way. Research shows we often make decisions in isolation, not as part of a system. But choosing experiments strategically – as DARPA did with its robot-car challenge – can yield breakthroughs. Under pressure to create autonomous military vehicles, DARPA invited teams of amateurs to race unmanned prototypes across the Mojave Desert. Most failed spectacularly. But instead of shutting the project down, DARPA doubled the prize money and repeated the test.

The next year, several vehicles completed the course – an enormous leap forward. By encouraging failure and broadening participation, DARPA tapped unexpected talent and accelerated innovation. The lesson? Embracing variability, controlling risk, and applying filters to find the most promising bets can make a high-failure strategy pay off.

Done right, a portfolio of mistakes exposes blind spots and surfaces discoveries no one else sees. The goal is to fail smart – at the right scale, across the right range of experiments. A few deliberate missteps can keep you balanced while the rest of the world wades in too deep.
When Alexander Fleming came back from holiday in 1928, he noticed something odd on a discarded petri dish: a ring of dead bacteria surrounding a patch of mold. He paused, leaned in, and realized something remarkable had happened. It wasn’t an instant breakthrough; years of trial and error followed until Florey and Chain isolated and concentrated penicillin. But by 1942, the drug had saved a human life for the first time – and soon millions more – earning all three men a Nobel Prize.

This “fortuitous” accident wasn’t a one-off. Six years earlier, a tiny drip from Fleming’s nose had landed on one of his cultures, leading him to discover lysozyme, the body’s natural antibacterial enzyme. You could call these strokes of luck, but they were really the product of curiosity and a mind tuned to notice the unusual. Fleming’s chaotic lab style created more chances to spot weak signals – but he also had the eye to recognize them and the discipline to test them.

Great discoveries often begin with unexpected outcomes. But to capitalize on these outcomes, you need to notice the anomaly, slow down, question assumptions, and shift your perspective. That loop – mistake, reflection, reframing, insight – is at the heart of brilliant mistakes.

Prepared minds cultivate their peripheral vision, resist jumping to conclusions, and explore problems from multiple, sometimes conflicting angles. At an organizational level, this looks like listening to frontline voices, testing rival hypotheses, encouraging debate, trusting intuition, and keeping systems flexible enough to let chance in.

Discovery, it turns out, favors those who look twice.
This lesson on Brilliant Mistakes by Paul J. H. Schoemaker shows how small, low-cost, deliberate missteps can reveal hidden assumptions – and teach you more than smooth successes ever could. These “smart failures” emerge when setbacks become opportunities for insight. By fostering debate, testing rival hypotheses, and training yourself to spot anomalies, you create a system that lets you fail safely, learn quickly, and make sharper, better-informed decisions.
