The Next Renaissance: AI and the Expansion of Human Potential by Zack Kass

What's it about?

The Next Renaissance (2025) explores how AI’s ability to deliver limitless cognitive power at near-zero cost will reshape work, health care, education, and finance. It examines the technological and societal thresholds that will determine outcomes, addressing both the promise and the costs.

Intelligence is becoming abundant. Not human intelligence, but computational power that mimics certain kinds of human thinking. The cost of running advanced AI has plummeted, and when a resource shifts from scarce to abundant this quickly, societies reorganize themselves.

Two thresholds – technical and social – shape what happens next. And the gap between those thresholds will define the coming decade.

This lesson dives deep into this transformation, which promises solutions to problems that have plagued humans for generations. But the same transformation also demands resources at staggering scale, displaces millions from work that gives life meaning, and raises questions about what makes humans valuable when thinking becomes cheap.

The Renaissance transformed Europe between the fourteenth and seventeenth centuries: artists rediscovered perspective while philosophers and scientists challenged assumptions about the natural world. When the printing press was invented around 1440, it accelerated everything. Books became affordable and ideas traveled faster than ever before. Knowledge that had been locked in manuscripts for centuries was suddenly everywhere.

This period is remembered as a great leap: an example of human societies reorganizing around new capabilities. The steam engine in the eighteenth century did the same. It didn't just power factories; it reshaped landscapes and transformed how people lived and moved. Electricity in the late nineteenth century was similar. Daylight no longer limited working hours, and communication across continents happened in seconds rather than weeks.

Each of these shifts followed a pattern. Something that had been scarce or expensive was suddenly abundant and cheap. The effects cascaded beyond the technology itself into how societies structured themselves and what kinds of lives seemed possible.

Artificial intelligence is following this same trajectory, but with something less tangible than steam or electricity: cognitive processing. For most of human history, complex analysis required rare expertise from specialists with years of training. Their time was expensive. Problems that needed intense mental work went unsolved if the resources weren’t there.

That constraint is dissolving fast. The economics tell the story. Running an advanced AI model cost around $60 per million tokens – the units of text these systems process – just months ago. Today the same work costs closer to $4. This kind of price collapse has historically signaled major disruption ahead, as industries reconfigure and previously impossible projects become routine.
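To make the scale of that collapse concrete, here's a minimal back-of-envelope sketch in Python. The $60 and $4 per-million-token prices are the figures quoted above; the workload size is a hypothetical assumption chosen purely for illustration.

```python
# Back-of-envelope sketch of the price collapse described above.
# The $60 -> $4 per-million-token prices come from the text; the workload
# (500 long reports at ~40,000 tokens each) is a hypothetical assumption.

OLD_PRICE_PER_MILLION_TOKENS = 60.0  # dollars, a few months ago
NEW_PRICE_PER_MILLION_TOKENS = 4.0   # dollars, today

tokens_needed = 500 * 40_000         # hypothetical analysis workload

old_cost = tokens_needed / 1_000_000 * OLD_PRICE_PER_MILLION_TOKENS
new_cost = tokens_needed / 1_000_000 * NEW_PRICE_PER_MILLION_TOKENS

print(f"At the old price: ${old_cost:,.0f}")              # $1,200
print(f"At today's price: ${new_cost:,.0f}")              # $80
print(f"Cost ratio: {old_cost / new_cost:.0f}x cheaper")  # 15x
```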

The term for this shift is unmetered intelligence, and the parallel to electricity is apt. Most people don't think about how power reaches their homes because it simply flows on demand. Similarly, cognitive work that once meant hiring consultants or spending days on research can now happen instantly. For analysis, pattern recognition, drafting documents, and mathematical modeling, the shift is from specialized service to basic utility.

Humanity has accumulated enormous amounts of knowledge. Libraries, databases, scientific papers, and historical archives contain more information than any person could absorb in multiple lifetimes. But information is different from processing power. Human brains can hold only so much in working memory, attention wanders, mistakes creep in, and fatigue sets in. AI isn't bound by these biological constraints.

The implications of unmetered intelligence are significant. Problems that have puzzled humans for generations may finally be solved – clean energy storage that makes renewable power practical everywhere, or medical treatments tailored to individual genetics. These challenges have remained unsolved because they involve more variables than human minds can easily tackle. In the age of unmetered intelligence, the analytic power to address them is cheap and readily available.

The conversation about artificial intelligence often collapses into a simple binary: Will it help or harm? But this framing misses something crucial. The technology itself sits at the intersection of two different kinds of limits, and the gap between them matters more than either one alone.

The first limit is technical – what AI can actually do right now, and what it will be able to do soon. These capabilities are expanding rapidly. Text-to-image generation appeared just a few years ago. Now systems convert text descriptions into video, into three-dimensional models for manufacturing, even into scent profiles for perfume design.

But some technical challenges remain unsolved. The alignment problem, for instance, asks how to ensure AI systems behave as intended rather than finding unexpected shortcuts. Teaching a system what not to do turns out to be harder than teaching it what to do. Researchers work on this constantly, testing edge cases and failure modes, but the problem hasn’t been fully resolved.
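A toy sketch can make "unexpected shortcuts" concrete. The scenario below is a hypothetical illustration, not an example from the book: a cleaning robot is rewarded for how little dirt its sensor reports, so a policy that simply switches the sensor off scores better than one that actually cleans.

```python
# Hypothetical "specification gaming" sketch: the designer wants a clean room,
# but the reward only counts dirt the robot's sensor can see.
import random

def proxy_reward(room, sensor_on):
    """Reward as written: number of cells the sensor reports as clean."""
    if not sensor_on:
        return len(room)                     # sensor off -> nothing looks dirty
    return sum(not dirty for dirty in room)

def true_objective(room):
    """What the designer actually wanted: genuinely clean cells."""
    return sum(not dirty for dirty in room)

random.seed(0)
room = [random.random() < 0.5 for _ in range(20)]   # True means a dirty cell

# Policy A does real work: it cleans the first half of the room.
cleaned = [False if i < 10 else d for i, d in enumerate(room)]
# Policy B does no cleaning at all and just blinds its own sensor.
untouched = list(room)

print("Policy A (cleans):        proxy =", proxy_reward(cleaned, True),
      " true =", true_objective(cleaned))
print("Policy B (blinds sensor): proxy =", proxy_reward(untouched, False),
      " true =", true_objective(untouched))
```

The gap between the proxy score and the true objective is exactly the gap that alignment research tries to close.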

The second limit is social: what will societies actually allow AI to do? This involves laws, ethical frameworks, cultural norms, and institutional policies. It asks different questions than the technical ones. It doesn't ask, for instance, whether a system can make hiring decisions, but whether it should. It doesn't dispute that AI can diagnose some medical conditions; it asks under what circumstances it should.

These social limits are applied unevenly around the world. Some communities adopt new technologies quickly while others resist or lack access entirely. The social limit isn’t one threshold but many, set by different groups with different values and different levels of power.

The gap between these two limits creates tension. Technology races ahead while social institutions struggle to keep pace. Laws get written for previous generations of capability. Ethical guidelines address yesterday’s concerns. Public understanding lags behind current reality. This gap – between what’s technically possible and what’s socially permitted – defines the adoption period for any transformative technology.

Where this gap matters most is in who decides. Technical capabilities get developed primarily in corporate research labs and well-funded universities, mostly in wealthy countries. But the effects ripple everywhere. Communities impacted by AI rarely have meaningful input into how those systems get designed or deployed.

The ideal would be for societies to collectively determine acceptable uses, but collective decision-making requires power to be distributed more evenly than it is now. When someone says the societal threshold will determine outcomes, the question becomes: Whose society? The adoption gap looks entirely different depending on geography, wealth, and proximity to power.

There is also a harder constraint. The computational abundance promised by unmetered intelligence rests on material foundations that are quite finite, and those physical dependencies create bottlenecks that pure software innovation can't overcome.

Every major technological shift extracts a price. The factories powered by steam engines darkened skies with coal smoke. Electrification required damming rivers and stringing wire across landscapes. The question isn’t whether transformation costs something, but what specifically gets spent and who pays.

The first cost is material. Artificial intelligence infrastructure operates at scales that strain existing resources. Training a single large language model can consume electricity equivalent to what hundreds of homes use in a year. Data centers need constant cooling, which means vast amounts of water, often drawn from regions already facing water stress. The hardware itself requires rare earth minerals extracted under dangerous conditions.
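For a sense of scale, here's a rough Python sketch of that comparison. Every number in it is an assumption for illustration: the household figure approximates average annual US consumption, and the two training-run figures loosely bracket the range of public estimates rather than describing any specific model.

```python
# Rough energy comparison for the claim above. All inputs are assumptions
# chosen for illustration, not measurements of any particular system.

HOME_ANNUAL_KWH = 10_500  # assumed average annual electricity use of one home

assumed_training_runs_kwh = {
    "GPT-3-scale run (~1.3 GWh, in line with published estimates)": 1_300_000,
    "larger frontier-scale run (assumed ~10x that)": 13_000_000,
}

for label, kwh in assumed_training_runs_kwh.items():
    homes = kwh / HOME_ANNUAL_KWH
    print(f"{label}: roughly the annual electricity of {homes:,.0f} homes")
```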

A global pattern emerges: the materials needed for AI chips come from the Global South, while the energy-intensive computation happens in data centers concentrated in wealthier regions in the North. The benefits flow primarily to corporations and populations with resources to access and implement the technology. Resource extraction in one place funds computational abundance in another.

Supply chains also reveal vulnerabilities. Advanced AI chips depend on extreme ultraviolet lithography machines. Currently one company in the Netherlands manufactures these machines. This single-point dependency means that geopolitical tensions, natural disasters, or manufacturing problems could halt AI development globally. The technology might seem infinitely scalable in theory, but physical bottlenecks limit what actually gets built.

Beyond material costs lies something harder to quantify: the costs to identity and meaning. What happens to human self-understanding when work disappears? Interviews with dockworkers facing automation revealed that their primary concern wasn’t money. It was belonging, tradition, continuity with the past. Work was more than just income for them. It gave structure to days, connection to community, a sense of contributing something that mattered.

Job displacement will hit hardest in sectors where tasks can be easily automated, like clerical work, customer service, basic analysis, and routine decision-making. The standard response suggests retraining for creative or technical work. But this assumes that everyone has equal access to education and time to retrain, and that there will continue to be enough creative work to absorb millions of displaced workers.

The third major cost involves what might be called dehumanization, though the term deserves more precision than it usually gets. Children and adults alike spend increasing hours with screens rather than with physical environments or each other. Community gathering spaces disappear as face-to-face interaction declines. The observation that humans weren't meant to live this way isn't nostalgic romanticism. It names a genuine loss of embodied presence and direct relationship.

Resource intensity, identity displacement, and erosion of embodied community – these costs emerge from choices about what to build and how to deploy it. They aren’t inevitable consequences of the technology itself but results of priorities embedded in development and implementation. The question is whether the abundance of wealth being created justifies what gets spent to create it.

The abstract promise of unmetered intelligence becomes concrete when examining specific areas where the technology will reshape how things work. Four domains stand out as particularly vulnerable to transformation: work, health care, education, and finance. Each represents a sector where cognitive labor drives value, and where AI can theoretically handle tasks humans currently perform.

Work changes fundamentally when cognitive tasks become cheap. Companies no longer compete primarily on how intelligent their teams are, since AI can match or exceed human analytical capacity in many areas. Competition shifts toward judgment, creativity, and relational skills. Evidence for this shift has already appeared. Teenagers publish research that would have required doctoral training a decade ago. Some companies hire directly from high school based on demonstrated capability rather than credentials. The message is clear: what someone studied matters less than how they think and relate to others.

Health care presents both dramatic possibilities and thorny complications. Personalized medicine could become genuinely accessible rather than limited to wealthy patients at elite institutions. Drug discovery might accelerate through AI simulation, identifying promising compounds faster than traditional laboratory methods. Diagnostic tools could reach communities that lack specialists, providing analysis where expertise is scarce.

Education might see the most radical transformation. Current systems optimize for standardization: same curriculum, same pace, everyone measured by identical metrics. AI enables genuine personalization, adapting to how each person learns, what captures their interest, where they struggle. Natural language interfaces remove technical literacy as a barrier. Someone who finds traditional technology frustrating can simply talk to an AI system, making the tools accessible to populations that were previously excluded.

Finance and daily life see transformation in more mundane but no less significant ways. Tax preparation, financial planning, navigating bureaucratic systems – tasks that currently require expertise or consume hours – could become simple. This potentially frees time for other pursuits: creative work, community involvement, caring for others, or rest.

These four domains aren’t arbitrary. They reflect where questions get asked most urgently by corporate executives, policymakers, and educators at well-resourced institutions. They represent sectors where cognitive labor currently commands high value and where automation promises significant returns on investment.

But they also reveal what doesn’t get centered. Agriculture and food systems, climate adaptation, biodiversity and ecosystem restoration, preservation of indigenous knowledge – these could be transformed by AI too, but they aren’t the domains receiving equivalent attention or investment. The selection reveals priorities: which problems are considered worth solving, which transformations seem valuable enough to pursue.

Domain selection shows what matters to those with resources to develop and deploy technology. Work, health care, education, and finance are important areas affecting billions of lives. But they’re also areas where AI serves industries and institutions in wealthy nations. The domains that might help communities live harmoniously with their environments, or preserve knowledge systems, receive less focus. Not because they’re less important, but because they’re less central to the concerns of those funding the technology.

After examining promises and costs, technical capabilities and social limits, the question becomes practical: How should people actually navigate this transformation? Four principles emerge as guideposts, each addressing a different aspect of life in an age where cognitive work becomes automated.

The first principle is simple and literal: go outside. Spend time in physical spaces, in weather, in environments that aren't mediated by screens. This isn't recreational advice but a recognition of something being lost. As cognitive work moves online and AI handles more analytical tasks, the pull toward constant interface with technology intensifies. Counterbalancing that pull requires deliberate investment in embodied presence. Community spaces, parks, places where people gather without devices – these become infrastructure worth protecting and building.

The second principle is just as direct: be human. Cultivate skills that resist automation because they emerge from embodied experience and relational context: emotional intelligence, moral reasoning, aesthetic judgment, humor, vulnerability, trust. As cognition becomes a commodity, these qualities become more valuable. More than that, they're what makes life meaningful. The point isn't just maintaining competitive advantage but remembering what matters beyond productivity.

The third principle addresses learning itself: learn how to learn. As AI transforms the workplace, adaptability matters more than focused expertise. Curiosity becomes a survival skill. Critical thinking remains valuable precisely because it becomes optional – when AI handles analytical work, some will choose not to think critically. But capability also expands. When anyone can access powerful cognitive tools, genius becomes more democratically available. The divide shifts from who has knowledge to who has curiosity and judgment.

The fourth principle is perhaps most important: lead with optimism. Not naive hope that everything will work out, but strategic conviction that outcomes depend on choices being made right now. This stance rejects both techno-utopianism, where AI solves everything without human effort, and doomerism, where disaster is inevitable. It rests on agency. The future gets shaped by decisions about what to build, how to deploy it, who benefits, and what costs are acceptable.

These principles assume certain privileges – safety to go outside, for instance, or access to opportunities for learning and self-development. But within those constraints, they offer a practical orientation. They're mostly addressed to people who have influence over how AI is developed and deployed: executives, policymakers, educators, and technologists.

What remains to be seen is whether these principles will be sufficient – whether protecting human qualities while automating human cognition can resolve the tensions. Ultimately, those tensions may point toward questions the playbook doesn't ask: what intelligence means, whose abundance matters, what kinds of relationship between human and non-human worlds the technology might serve. Those remain vital for everyone to consider.

In this lesson on The Next Renaissance by Zack Kass, you've learned that the cost of advanced AI has plummeted, making cognitive work that once required rare expertise available on demand, like electricity.

This shift promises breakthroughs in health care, education, and scientific research. But the transformation exacts real costs: massive energy and water consumption, rare-mineral extraction, job displacement, and the erosion of embodied human connection.

Four principles offer guidance in this transformative age: go outside, be human, learn how to learn, and lead with optimism. Yet questions remain about whose intelligence counts, whose abundance this serves, and what the living world can sustain.
