Invisible Rivals: How We Evolved to Compete in a Cooperative World, by Jonathan R. Goodman
What's it about?
Invisible Rivals (2025) explores the interplay between human cooperation and competition. Drawing on multiple disciplines, including biology and anthropology, it argues that human motivation is neither purely cooperative nor purely competitive but a blend of both. It suggests we address our tendency toward self-interest to help create the best society we can.
Do you consider yourself more cooperative or competitive? Selfish or altruistic? What drives us to be generous in some contexts but calculating in others?
Think about the last time you split a dinner bill with a friend. Did you pull out your phone’s calculator app to tally your exact share? Or did you wave it off and suggest an even split, even though technically they ordered more?
In this lesson, we’ll look at the many varieties of human selfishness and altruism. We’ll find that most of us display a mix of the two. In fact, we’re strategic cooperators, wired to adapt our behavior based on our context, relationships, and incentives. We share freely in some moments and become meticulous accountants in others.
We’ll also learn how culture shapes our tendency to cooperate – or not – and how we can design institutions to steer our imperfect natures towards the common good.
Let’s begin.
In 1975, mathematical biologist George Price took his own life. He was found in a London squat, surrounded by a group of homeless men he had befriended. Price had been a respected academic – he’d developed groundbreaking equations explaining how altruism evolves in biological systems.
But Price didn't just theorize about altruism – he lived it to the extreme. He gave away his possessions, opened his home to anyone who needed shelter. Eventually, he joined the ranks of the homeless he was helping. Even as he slept rough, Price continued publishing academic papers – a brilliant mind wrestling with the implications of his own discoveries.
Price’s remarkable story suggests a deep question: just how altruistic are humans, really? Most of us, of course, exist somewhere far short of Price’s extreme. We care, yes. But day by day we navigate between concern for others and concern for ourselves.
This tension between cooperation and competition isn’t unique to us humans. Throughout the animal kingdom, both behaviors exist side by side. Vampire bats, for example, share blood meals with hungry roost-mates when food is scarce. Yet these same bats will compete fiercely for the best roosting spots.
Likewise, chimpanzees form complex alliances to overthrow dominant males – but then turn on former allies when power shifts. Even bacteria engage in something like cooperation, through the creation of biofilms, even as they engage in chemical warfare against competitors.
Back in the realm of human behavior, debates on human nature have raged in the social sciences. For decades, economics promoted the model of Homo economicus. Humans, in this view, are calculating, self-interested actors working to maximize their personal gain. It’s a view that has shaped everything from market theory to public policy.
But behavioral research shows a truth that’s more nuanced: humans also sometimes act against their immediate self-interest. We give to charity and pay our taxes. We follow rules even if no one’s watching. We even risk our lives as soldiers and firefighters.
So, biologically informed theorists instead proposed another model: Homo reciprocans. Humans, in this view, are conditional cooperators. We’re neither purely selfish nor purely altruistic. Rather, we’re beings whose cooperation depends on context, relationships, and expectations of mutual benefit. But for such a cooperative system to be stable, it first needs to solve the problem of unchecked aggression. How did early humans manage to tame our most aggressive instincts?
In most of the animal kingdom, physical dominance is the name of the game – the alpha gets first pick of food, mates, and territory. This raises a more specific question: not whether humans cooperate, but how we developed that capacity in the first place. What stopped the biggest and strongest individuals from simply taking everything, making widespread cooperation impossible?
According to Harvard anthropologist Richard Wrangham, the answer resides in a crucial turning point in our evolutionary history. As early human societies developed, hotheads who couldn’t control their impulses faced severe consequences. They were punished, exiled, or executed by the group – effectively removing them from the gene pool. This created what Wrangham calls selection against reactive aggression.
We can see a similar process in how wolves were domesticated. Early humans didn’t actually set out to breed dogs. Rather, friendlier canines tended to get a chunk of meat thrown their way, while aggressive ones were driven away or killed. The human preference for good behavior created a gradual evolutionary pressure towards the gentle beasts we have today.
Wrangham’s point is that this doesn’t stop with dogs. The tendency for human societies to punish aggressors also favored those people who could regulate their aggression. In essence, Homo sapiens domesticated itself.
One big factor here was the development of stone tools. Suddenly, physical size mattered less. A smaller, weaker person armed with a stone axe could now overcome someone bigger and stronger. This technological shift created a more level playing field, nudging early humans towards more equitable social relationships.
Once things became more equal, there was naturally less exploitation and more cooperation and sharing. And while it looks different from place to place, sharing is a core part of basically every human society. Hunter-gatherers only secure meat from time to time, so group survival depends on sharing after each successful hunt. That’s why hunter-gatherers typically show such strong, unwritten rules about how the food gets divided up.
It’s a practice that researchers call risk pooling. A fascinating example comes from the Rossel Islanders of Melanesia. This isolated community of 3,400 people has long dealt with the recurring threat of devastating cyclones, which strike every few years. They all work together to maintain cyclone shelters across their island. When a storm hits, people huddle together inside. When it passes, they emerge, survey the damage, and share whatever resources remain.
But this doesn't mean humans will share with just anyone. Anthropological studies consistently find that while people donate resources to others, they care a lot about who receives them. Relationships matter. In other words, while people share with others generously, they don't share for the good of all. Rather, they share with those they happen to like.
This selective generosity perfectly illustrates the tension between our cooperative and competitive instincts. We’ve evolved mechanisms for sharing and collaboration, but these operate within careful boundaries of kinship and mutual benefit.
The picture of human cooperation painted so far might seem almost heartwarming – people sharing resources, pooling risks, working together for mutual benefit. But it’s not the whole story.
Early in the 20th century, anthropologist John Moore took a harder look at this rosy picture of peaceful, equitable nomads. His review of anthropological research revealed something disconcerting. In 11 out of 15 hunter-gatherer societies he reviewed, older men systematically exploited women and younger group members. So much for universal cooperation!
So, what determines whether we exploit or cooperate? The answer lies in understanding what makes humans so different from other animals. When most species spread to new geographic areas, they face a stark choice: adapt their bodies over generations – often becoming a new subspecies – or die out. Humans found a third way. Instead of changing our bodies, we change our behavior, learning from those around us.
Take a baby from any human group on Earth and drop them into a completely different environment, and they’ll learn from their peers what it takes to thrive there. This is why, compared to other animals, humans have such a long childhood. Not only are we growing physically, we’re absorbing massive amounts of cultural information.
This human specialty is called acculturation, the ability to soak up cultural knowledge from our social group and use it to survive under local conditions. For Homo sapiens, acculturation has been an evolutionary jackpot, one that’s allowed us to colonize practically every environment on the planet, without having to change our biology.
The specific way a particular society acquires resources in its particular niche is what the author calls our mode of production. But here’s the catch: for members of a hyper-social species like ours, the environment isn’t just trees and rivers and weather patterns. Our environment is also social – it’s each other. So, alongside the mode of production for getting things from nature, we have what’s called a mode of exploitation. This is all about how individuals get resources from each other.
From an evolutionary perspective, exploitation is a tried and tested strategy. Why work hard gathering food when you can get others to gather it for you? It’s a winning strategy, if you can pull it off. But if force and aggression are off the table, what options are left?
In human societies, one of the main ways is through social status. Those who are seen as admirable or important can receive more than they give, often in ways that feel perfectly normal to those involved. High-status individuals get better resources, more opportunities, and more chances to pass on their genes. And here’s the twist: the very same cultural learning that allows us to cooperate so well also creates the hierarchies that enable this sort of exploitation.
The result is what the author calls invisible rivalry: the uniquely human dynamic of sharing, cooperating, and seeking personal advantage, all at the same time.
It turns out that gaining status isn't the only way humans get ahead. There’s another, much darker strategy: simple deception.
While we might think of lying as a uniquely human flaw, it's actually a strategy found throughout nature – even at the cellular level. Cancer cells, for example, masquerade as healthy tissue, evading the immune system by mimicking the chemical signatures of legitimate cells. If we imagine the body as a society, cancer cells are the cheaters and freeloaders. They grab resources meant for the collective good while contributing nothing in return.
Such deception occurs all throughout the animal kingdom. Dolphins mimic the calls of absent pod members – essentially committing identity theft. Ravens have been observed giving false alarm calls when food is nearby. Once their competitors scatter, they swoop in to claim the prize themselves.
Our closest relatives, chimpanzees, are particularly good at this. They hide food from groupmates, sometimes succeeding brilliantly, sometimes getting caught red-handed. A chimp might fake injury to distract rivals from a hidden food cache. Another might learn to approach valuable resources only when competitors aren’t watching.
Humans have, of course, inherited this capacity for deception. Some of us more than others. Psychopaths, for instance, can be devastatingly effective exploiters. They mimic cooperative behavior while manipulating others into giving them what they want. In a world with air travel and 8 billion humans, psychopaths can roam freely, leaving trails of destruction in their wake. Their combination of Machiavellian intelligence and lack of empathy makes them particularly dangerous to cooperative systems.
So with all this temptation to take from others, how do humans manage to cooperate in the first place? Let’s now zoom in on the specific psychological tools we all carry that make cooperation a viable strategy.
The first and most fundamental tool is our instinct to help family. It’s a principle called kin selection, and it explains why we’ll risk everything for our children or siblings but might not do the same for a stranger. The evolutionary biologist William Hamilton formalized this with a simple rule: helping is favored when the benefit to the recipient, weighted by our genetic relatedness to them, outweighs the cost to ourselves. A parent sacrificing for a child, siblings sharing resources, extended families supporting one another – these all help to maximize the spread of our genes.
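Hamilton’s rule is usually written as rb > c: help evolves when relatedness (r) times benefit (b) exceeds cost (c). Here’s a minimal sketch of that inequality in code – the specific numbers are illustrative, not from the book.

```python
def helping_favored(relatedness, benefit, cost):
    """Hamilton's rule: a costly act of help is favored by kin
    selection when relatedness * benefit exceeds the cost."""
    return relatedness * benefit > cost

# Full siblings share half their genes on average (r = 0.5);
# first cousins share one eighth (r = 0.125).
print(helping_favored(0.5, 10, 4))    # sibling: 0.5 * 10 = 5 > 4
print(helping_favored(0.125, 10, 4))  # cousin: 1.25 < 4
```

The same act that pays off toward a sibling can fail the test toward a distant cousin, which is why generosity tends to track relatedness.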
But humans, notably, cooperate far beyond our family circles. This is where reciprocity enters the picture. “You scratch my back, and I’ll scratch yours” is the basis for a great deal of trade and coordination. A group of people helping each other, even for selfish reasons, has enormous advantages over individuals going it alone. The tricky part, though, is figuring out who you can actually trust to return the favor.
One framework social scientists use to model these dilemmas is game theory. Imagine two deer hunters make an agreement: if you share your best hunting spots with me, I’ll share mine with you. That way, we can both catch more deer than we ever could alone.
But as we begin sharing information, each of us faces a choice. “Do I share everything or hold back a little?” If I keep my word and tell all, while you break yours and hold back, you’ll get more deer than ever, while I get stiffed. But if both of us, fearing this, hold back from each other – well, we’re no better off than where we started.
This quandary – when to be loyal and when to try getting more for yourself – is known as a prisoner’s dilemma, and many consider it an essential dynamic in human and animal cooperation.
So what's the best way to act in these situations? Computer models simulating thousands of such encounters reveal a surprisingly effective strategy called tit for tat. The rule is simple: you start by cooperating, and on every following turn, you just copy what the other person did last. If they cooperate, you cooperate. If they betray, well, then you betray. Be nice first, but don't let yourself be a pushover. This approach rewards cooperation and punishes betrayal, creating the stable, trusting partnerships that allow societies to thrive.
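A minimal simulation makes the logic concrete. The sketch below plays an iterated prisoner’s dilemma using the standard illustrative payoffs (mutual cooperation: 3 each; mutual defection: 1 each; a lone defector: 5; an exploited cooperator: 0) – the numbers and strategy names are assumptions for illustration, not taken from the book.

```python
COOPERATE, DEFECT = "C", "D"
# (my payoff, their payoff) for each pair of moves
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # exploited once, then retaliates
print(play(tit_for_tat, tit_for_tat))    # cooperates every round
```

Against a pure defector, tit for tat loses only the first round before punishing every round after; against another tit-for-tat player, it cooperates indefinitely. That combination – nice, retaliatory, and forgiving – is why it performed so well in simulated tournaments.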
After all this, one thing should be clear: human nature isn’t simple. We aren’t wired to be purely selfish or purely noble. As we’ve seen, some cultures emphasize egalitarian sharing, while others embrace rigid hierarchies. Some prioritize group harmony, while others celebrate individual achievement.
This incredible flexibility is our greatest advantage – it allows our social norms to evolve to match local challenges. But it’s also what makes designing a good society so difficult. If there's no single blueprint for how to treat each other, where do we even begin?
One answer is to get better at using our social radar, that is, learning to distinguish between authentic cooperation and self-serving calculation. This means paying less attention to what people say and more to what they do, consistently, over time. When evaluating people in positions of power, for example, their track record of past actions and their network of allegiances often matter more than their public statements. In other words, when it comes to reading intentions, actions speak louder than words. And repeated actions speak louder than isolated ones.
Of course, formally punishing norm-breakers is another tool. Every society uses punishment to some degree. But it has its limits. Sophisticated exploiters often respond to punishment by simply updating their methods. This creates expensive cycles where enforcement systems must continuously evolve to counter new strategies.
A more powerful tool is often reputation. Because social standing is such a valuable resource, the threat of public exposure can be a massive deterrent. For an academic caught faking data, the career-ending shame is often a far greater punishment than any official fine. However, reputation-based systems require fair and robust norms to remain credible. Otherwise, they risk becoming merely another tool of manipulation.
In the end, we can’t engineer human selfishness out of our nature – that was never a realistic option. The goal, instead, is to design systems with a clear-eyed view of who we really are. By understanding our complex social instincts, we can create environments where it becomes more difficult to exploit others – and more rewarding to cooperate. We can build institutions that channel even our self-interest toward outcomes that benefit everyone.
The main takeaway of this lesson on Invisible Rivals by Jonathan R. Goodman is that humans are strategic cooperators whose behavior depends on context.
We evolved both to share and to exploit. And while human altruism is real, it coexists with manipulation and status-seeking. This duality is captured by the book’s two key concepts: our mode of production, which reflects how we get resources from nature, and our mode of exploitation, which reflects how people get resources from each other.
Our goal, then, should be to design societies and institutions that reward our cooperative instincts while making exploitation costly. Yes, we can’t make perfect people – but we can build smart systems that bring out the best human nature has to offer.