Decision-Driven Analytics: Leveraging Human Intelligence to Unlock the Power of Data, by Bart de Langhe & Stefano Puntoni
What's it about?
Decision-Driven Analytics (2024) challenges the traditional approach of data-driven decision-making by proposing that organizations should begin with the decisions they need to make rather than with the data they happen to have. It presents a framework built on four pillars that bridges the gap between data analysts and business decision-makers, addressing the common problem of analytics efforts failing because data analysis becomes disconnected from actual business decisions. Rather than treating data as the starting point, this approach emphasizes human judgment in determining which questions matter most for organizational impact.
When it comes to handling data, people naturally split into two distinct camps: divers and runners.
Divers love plunging deep into datasets. They’re the ones who get genuinely excited about statistical models and find satisfaction in wrestling with complex algorithms. Runners operate differently. They’re focused on the heartbeat of business – understanding customers, sensing market shifts, knowing instinctively what will move the needle.
In today’s Big Data obsession, many businesses have become fixated on the divers. They pour resources into analytics teams and infrastructure, operating under a seductive assumption – that enough data and computational power will automatically generate good decisions.
But it doesn’t work that way. Successful businesses need both perspectives working together. Data without business judgment is just noise. Business instinct without data support is just guesswork. That’s where decision-driven analytics comes in – a framework that treats both groups as equal partners, rather than putting data on a pedestal.
This lesson is a call to action, urging businesses to remember that the humans in a business – the managers, executives, and decision-makers – aren’t obstacles to overcome with better algorithms. They’re essential to making data actually useful.
In the sections ahead, you’ll discover how decision-driven analytics works and how to apply it in your own business, cutting through today’s data overload to make genuinely better decisions.
Let’s jump in.
The business world has fallen head over heels for big data and machine learning. Everywhere you look, companies are rushing to become "data-driven," convinced that algorithms will eliminate messy human errors and biases. It’s an appealing vision: let the numbers do the talking, and perfect decisions will follow.
But despite all this investment in analytics, many business leaders are discovering their data initiatives aren’t delivering. In one survey, only about a third of chief data officers – the very executives championing these data transformations – believed their own role was well-established and successful. Even the people running the show are skeptical.
So what’s going wrong?
The core problem is surprisingly simple: organizations are focusing on the data itself rather than the decisions they need to make. They’re generating impressive analyses that float untethered from any actual business choice. It’s like building a magnificent bridge that doesn’t connect to either shore.
Two forces are driving this misguided approach. First, behavioral science has spent years highlighting how error-prone human judgment can be. Second, technology has exploded – AI and super-fast computers can process datasets we couldn’t dream of analyzing just a few years ago. Put these together, and the conclusion seems obvious: replace flawed human thinking with objective data analysis.
Now, while many businesses are putting data before humans, some are doing something even worse: preference-driven analytics. This is when executives decide what they want to do first, then send analysts hunting for data to justify that decision. It’s confirmation bias masquerading as rigorous analysis, and it’s rampant across business.
Decision-driven analytics offers a radically different approach by reversing the entire sequence.
You start by identifying the actual decision you need to make. Not vague aspirations, but concrete choices with real alternatives facing your organization. Then you ask specific questions that would genuinely help you choose between those options. What information would actually change your mind? What would make one path clearly superior to another?
Only after clarifying your decision and defining your questions do you start collecting data to answer them.
This isn’t just a subtle shift – it’s about fundamentally reimagining how data serves business. Instead of letting available information dictate your questions, you let necessary decisions guide what data you seek.
In the next sections we’ll see how decision-driven analytics works. We’ll start with the first step: decisions.
Leadership teams often blame poor communication when data presentations fall flat. The numbers seem too complex, the analysts too technical, the insights too obscure. But dig deeper and you’ll find the real problem usually flows in the opposite direction.
Leaders often haven’t clarified the fundamental question: What decision are we actually trying to make? Without that clarity, even the most sophisticated analysis becomes theatre – impressive to watch but ultimately pointless. You can’t evaluate whether data is useful until you know what choice it’s supposed to inform.
This is why decision-driven analytics starts with something unglamorous but essential: building a clear list of decision alternatives. Not questions to explore or problems to understand, but actual choices you need to make.
The challenge? Most teams suffer from what the authors call “bounded awareness” – they only see the options already on their radar. Breaking free requires deliberately seeking outside perspectives. Consider an audio team trying to improve a car’s sound experience. By consulting with the engine department, they might discover that optimal sound is not just about the car’s speakers, but also depends on the noise the engine makes. Fresh viewpoints reveal alternatives you’d never imagine sitting in your usual meeting room.
But here’s the thing: not every possible decision deserves a spot on your list. You need three filters to whittle them down.
First, include only decisions within your control – options you can actually implement. Second, ensure they’re feasible by excluding choices with prohibitive costs or unacceptable risks. Third, focus on decisions with genuine potential impact. If an option barely moves your key metrics, it’s not worth the analytical firepower.
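The three filters above can be sketched as a simple screening step. The field names and impact threshold below are my own illustration, not the authors’ terminology:

```python
# A minimal sketch of the three filters for trimming a list of decision
# alternatives. Field names and the threshold are illustrative assumptions.
IMPACT_THRESHOLD = 0.05  # e.g. minimum expected lift on a key metric

def shortlist(options):
    """Keep only decision alternatives worth analyzing."""
    return [
        o for o in options
        if o["in_our_control"]                          # filter 1: we can implement it
        and o["feasible"]                               # filter 2: acceptable cost and risk
        and o["expected_impact"] >= IMPACT_THRESHOLD    # filter 3: actually moves the needle
    ]

options = [
    {"name": "redesign pricing", "in_our_control": True, "feasible": True, "expected_impact": 0.12},
    {"name": "change competitor prices", "in_our_control": False, "feasible": True, "expected_impact": 0.30},
    {"name": "new office plants", "in_our_control": True, "feasible": True, "expected_impact": 0.001},
]
print([o["name"] for o in shortlist(options)])  # ['redesign pricing']
```

The point of the sketch is the order of operations: options are screened for control, feasibility, and impact before any data is collected about them.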
This disciplined approach transforms decision-making from abstract aspiration to concrete preparation. You’re not just thinking about choices anymore – you’re systematically mapping the territory where data can actually create value.
You’ve mapped your decision alternatives. Now comes the critical next step: crafting the right questions. And this is where many organizations stumble badly.
Picture a manager walking up to the analytics team with this request: “How do we increase gross income?” Seems reasonable, right? Actually, it’s what’s known as a “fuzzy” question – and it’s a recipe for mutual frustration.
Here’s why: that question belongs in the management department, not the analytics lab. It’s asking for business strategy, not data analysis. The analysts will spin their wheels trying to answer something unanswerable with data alone. Leadership will end up disappointed by results that feel disconnected from their actual needs.
Decision-driven analytics requires leaders to do the hard work upfront – sharpening their questions until they’re precise enough that data can actually help rank options against each other. A subscription service, for instance, shouldn’t ask, “How do we retain customers?” Instead, they might ask, “Which specific customer segments would respond most profitably to a targeted incentive scheme?” Notice the difference? The second question creates space for data to do meaningful work.
Another crucial distinction leaders need to grasp is the difference between factual and counterfactual questions.
Factual questions are purely predictive. An online retailer asking, “Which products are most likely to be returned?” is simply seeking patterns in existing data. Counterfactual questions probe deeper. They ask what would happen with an intervention versus without one. They’re comparing parallel universes – one in which you act, one in which you don’t.
This distinction isn’t academic hairsplitting. It fundamentally changes how you allocate resources. Consider the 2012 Obama presidential campaign. Obama’s data scientists could have tried to prioritize voters based on the factual question of who would most likely vote for him.
But instead, they framed it counterfactually. They built their models based on the question of who would most likely be swayed if targeted. This reframing was strategic gold. By focusing on persuadable voters, they conserved valuable campaign resources and deployed them with surgical precision.
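The difference between the two framings can be made concrete with toy numbers. Everything below is hypothetical and not the campaign’s actual model: each voter gets a baseline support probability and an estimated lift from being contacted.

```python
# Hypothetical voters: baseline probability of support, plus the estimated
# change in that probability if the campaign contacts them. Numbers invented.
voters = [
    {"name": "loyal supporter", "base": 0.90, "lift_if_contacted": 0.01},
    {"name": "persuadable",     "base": 0.50, "lift_if_contacted": 0.15},
    {"name": "firm opponent",   "base": 0.10, "lift_if_contacted": 0.02},
]

# Factual framing: who is most likely to vote for us?
factual = sorted(voters, key=lambda v: v["base"], reverse=True)

# Counterfactual framing: for whom does contact change the outcome most?
counterfactual = sorted(voters, key=lambda v: v["lift_if_contacted"], reverse=True)

print(factual[0]["name"])         # loyal supporter
print(counterfactual[0]["name"])  # persuadable
```

The factual ranking pours resources into someone who would vote for the candidate anyway; the counterfactual ranking surfaces the persuadable voter, which is where contact actually changes the outcome.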
The lesson? Before analysts touch a single dataset, leaders must clarify their questions. Only then can data analysis yield genuinely useful results.
In 2012, JP Morgan lost $6 billion in what became infamously known as the “London Whale Trade.” When investigators dug into what went wrong, they discovered something almost embarrassingly simple: a typo in an Excel spreadsheet. This tiny slipup had played a significant role in causing the financial catastrophe.
Relying too heavily on data to make decisions puts you at considerable risk. But often it’s not about the data itself; it’s how we interpret it.
Consider what happened when Apple gave users the option to refuse tracking. Meta launched a campaign claiming that without targeted ads, small businesses could lose up to 60 percent of their sales per advertising dollar spent. The message was clear: You need our algorithm, or your business will collapse.
They backed this claim with A/B testing, comparing ad revenue between two groups of campaigns – one using targeted ads, and one not. The campaigns using targeting generated more revenue, and Meta attributed this to their algorithm’s brilliance.
But wait. Those customers who were “more likely to spend” because of targeting? They were also more likely to spend anyway, based on their underlying characteristics. Let’s say Meta’s algorithm targeted high-spend customers. These customers would have spent more even if they had been reached with untargeted ads. That means we can’t attribute higher returns on ads solely to the effectiveness of Meta’s targeting techniques.
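This selection effect is easy to demonstrate with a toy simulation. In the invented world below, ads have zero causal effect, yet a naive comparison between targeted and untargeted customers still shows a large gap:

```python
import random

random.seed(1)

# Toy world where ads have ZERO causal effect: spending is driven entirely
# by an underlying customer trait. All numbers are invented for illustration.
customers = [{"affinity": random.random()} for _ in range(10_000)]
for c in customers:
    c["spend"] = 100 * c["affinity"]

# A "targeting algorithm" that simply selects high-affinity customers.
targeted = [c for c in customers if c["affinity"] > 0.7]
untargeted = [c for c in customers if c["affinity"] <= 0.7]

def avg_spend(group):
    return sum(c["spend"] for c in group) / len(group)

# Targeted customers spend far more on average (roughly 85 vs 35 here),
# even though ads did nothing at all. The naive comparison measures
# selection, not the ad's effect; only randomizing who gets targeted
# would isolate the latter.
print(round(avg_spend(targeted)), round(avg_spend(untargeted)))
```

The gap is entirely an artifact of who was selected, which is exactly why comparing targeted and untargeted campaigns cannot, by itself, credit the targeting algorithm.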
So how do you protect against these data pitfalls? The instinctive response is to add more data – more variables, more groups, more subgroups, until you’ve covered every possible angle. Big Data has made this approach increasingly tempting.
But the reality is that more data often creates a false sense of confidence without delivering more reliable results. What actually matters isn’t the volume of data you collect. It’s whether the specific data you’re gathering is the data required to answer the questions relevant to your decisions. That precision – not comprehensiveness – is what’s crucial to decision-driven analytics.
More data doesn’t equal better decisions. The right data does.
In 2022, Elon Musk bought Twitter for $44 billion. The next year, he announced he’d be rebranding it completely, wiping away one of the most recognizable names in tech.
How much money was he throwing away? Brand valuation firm Brand Finance pegged the Twitter brand at $3.9 billion. Other estimates valued it at $20 billion, roughly five times higher. When faced with questions like these, we instinctively reach for precise numbers to anchor our thinking. But is this really the right approach?
Here’s the thing: precision can be incredibly useful when you’re trying to convince others of something. A specific figure commands authority. But when you’re trying to understand the world – which is inherently complex and messy – you should be embracing uncertainty, not eliminating it.
When you encounter precise figures, it’s worth pausing and investigating the range behind them. Knowing that Twitter’s valuation could lie somewhere between $3 billion and $100 billion is actually more useful than locking onto a single figure. The range reveals the uncertainty, and this is important information.
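One way to make this habit concrete: when you have several estimates, report the range and its spread rather than a single figure. The snippet below reuses the two valuations mentioned above; the framing is my own sketch, not a method from the book.

```python
# The two brand valuations mentioned above, in $ billions.
estimates = {"brand_finance": 3.9, "other_estimate": 20.0}

low, high = min(estimates.values()), max(estimates.values())

# Reporting the range, not a single number, surfaces the uncertainty.
print(f"valuation range: ${low}B to ${high}B ({high / low:.1f}x spread)")
```

A decision that only pays off if the brand is worth more than, say, $15 billion looks very different from one that is robust across the whole range; a single point estimate hides that distinction.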
You should also watch for the categories your mind imposes on the world to create false precision. Just consider the Myers-Briggs Type Indicator, which is widely used in business settings. This test relies on yes/no answers to questions that genuinely call for more nuanced responses. It forces continuous spectrums of personality into neat binary boxes.
Remember the principle about avoiding “fuzzy” questions? Well, as a decision-maker, you should be asking “fussy” questions – clear, precise, and well-structured inquiries that guide analysis toward useful answers.
But when it comes to the data, you should actually be looking for “fuzzy” answers. Precise answers might give you the comforting illusion of clarity and certainty. But fuzzy answers more accurately represent the world as it actually exists – probabilistic, uncertain, and complex.
What’s more, fuzzy answers allow you to go back and refine your questions. They reveal where your understanding is thin, where assumptions are shaky, and where more investigation is needed. This iterative process ultimately leads you toward the genuine insights you’re seeking.
Embrace the fuzzy. It’s where real understanding lives.
We’ve now covered the key elements of decision-driven analytics: decisions, questions, data, and answers. But even if you incorporate all the guidelines we’ve seen, you still have some limitations to deal with.
Those limitations have a name: resources. Time, money, analytical capacity – they’re all finite. Which means you need to prioritize ruthlessly. You need to ask yourself which answers are actually going to be most useful for your business’s goals.
To make those prioritization decisions, there are three critical questions you need to answer first. Question one is, “How important is the decision, really?”
Imagine you’re choosing between yellow sticky notes and blue sticky notes for the office. They’re identical except for color, and they cost the same. Should you collect data and analyze employee color preferences? Of course not. You can just pick one and move on. Some decisions simply don’t warrant analytical firepower.
Question two is, “Is the question relevant?” Let’s say you’re an HR professional deciding whether to give employees one or two days a week to work from home, aiming to improve productivity. You might ask, “How many emails are employees sending?” But stop – is this actually relevant? Email volume might reflect inefficiencies, spam floods, or pointless reply-all chains rather than genuine productivity.
A more relevant question would be, “Do employees feel more motivated when working from home?” One question measures activity noise; the other targets what you actually care about.
The third question for whether analysis is warranted is, “What is the cost of collecting the data?” Some data is simply too expensive or time-consuming to gather. Some questions call for controlled experiments rather than an online survey. But these require specialized equipment, trained personnel, and significantly more time. Other questions demand massive datasets that make sense for an international company but would be absurd for a five-person startup.
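These three screening questions can be folded into a rough prioritization score. The formula and all the numbers below are my own illustration, not the authors’:

```python
def priority(importance, relevance, cost):
    """Rough worth-analyzing score: high-stakes, on-target, cheap-to-answer
    questions float to the top. Scales and weights are purely illustrative."""
    return importance * relevance / cost

candidates = {
    "sticky-note color preference": priority(importance=0.01, relevance=0.5, cost=1.0),
    "work-from-home motivation survey": priority(importance=0.8, relevance=0.9, cost=2.0),
}

best = max(candidates, key=candidates.get)
print(best)  # work-from-home motivation survey
```

Even a crude score like this captures the point: the sticky-note question fails on importance, while the work-from-home question scores well on all three dimensions despite its higher collection cost.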
In the end, you have to remember that data is a means to an end. The decisions you make are what count. That’s what decision-driven analytics is fundamentally about – not simply amassing information, but moving purposefully toward better choices.
In this lesson on Decision-Driven Analytics by Bart de Langhe and Stefano Puntoni, you’ve learned that data analysis alone cannot provide businesses with the answers they need to make better decisions.
Instead of starting with available data and searching for uses, decision-driven analytics flips the sequence entirely: Begin with concrete decisions you need to make, craft precise questions that help you choose between options, then collect only the specific data required to answer those questions.
Create and refine your decisions by seeking diverse perspectives and then trimming your list. Avoid “fuzzy” questions and distinguish clearly between factual and counterfactual questions. Collect the right data – not just more data – and be wary of interpretational pitfalls. And embrace uncertainty in answers, using it to refine your questions. Finally, allocate your resources by prioritizing important, relevant, and feasible inquiries, and remember that data is merely a means to an end. The decisions you make are what ultimately matter.