
The Monopoly Money Problem: Can Economics Survive Superintelligent AI?

There’s a narrative floating around tech circles that goes something like this: AI will generate so much wealth that we can simply give everyone a universal basic income, and abundance will reign. Problem solved.

But when you actually think through the mechanics, the story starts to fall apart. If everyone receives the same income, isn’t that money effectively arbitrary, Monopoly money handed out in equal stacks? Don’t markets require differential purchasing power to function? Can everyone buy a Lamborghini? Obviously not. So what happens to prices? What happens to scarcity? What happens to the entire economic framework we’ve built civilization on?

These aren’t rhetorical gotchas. They’re real questions that the “AI abundance = UBI utopia” crowd tends to gloss over. And the deeper you dig, the stranger the territory becomes.

The Floor, Not the Ceiling

Let’s start with a common misconception. Universal Basic Income doesn’t mean universal equal income. Most serious UBI proposals establish a floor, say $2,000 per month, that everyone receives unconditionally. But people can still earn more through work, investments, creative output, or entrepreneurship.

In this model, you’d still have people living on $2,000 per month alongside people earning $200,000 per year. The inequality that enables price signals and resource allocation persists. Markets continue to function. The difference is simply that no one falls below a certain threshold.

This framing handles the Lamborghini question elegantly. If there are 5,000 Lamborghinis produced annually and 50,000 people want one, the price rises until only 5,000 buyers remain. Some people will have saved their UBI for years. Some will have earned additional income through other means. Some will decide it’s not worth the cost. The price mechanism still allocates scarce resources. UBI doesn’t break that.
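
For the mechanically minded, here’s a minimal sketch of that clearing process in Python. The reservation prices are invented for illustration; the point is just that a fixed supply plus varied willingness to pay produces a price at which exactly the supply clears.

```python
# A minimal sketch of the price mechanism described above: with a fixed
# supply and more would-be buyers than units, the price rises until the
# number of buyers still willing to pay matches the supply.
# The reservation prices below are invented for illustration.
import random

random.seed(42)

SUPPLY = 5_000  # Lamborghinis produced this year
buyers = [random.uniform(200_000, 2_000_000) for _ in range(50_000)]
# each value: the maximum price that buyer is willing to pay

def quantity_demanded(price: float) -> int:
    """Number of buyers whose reservation price meets the asking price."""
    return sum(1 for max_price in buyers if max_price >= price)

# Raise the price until demand no longer exceeds supply (simple bisection).
lo, hi = 0.0, max(buyers)
for _ in range(60):
    mid = (lo + hi) / 2
    if quantity_demanded(mid) > SUPPLY:
        lo = mid  # too many buyers at this price: price must rise
    else:
        hi = mid  # supply covers demand: price can fall
print(f"Market-clearing price: ~${hi:,.0f}, "
      f"buyers at that price: {quantity_demanded(hi)}")
```

Bisection works here only because demand falls monotonically as price rises, which is the whole trick of the price mechanism.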

So far, so good. But this model rests on an assumption that deserves scrutiny: that people will still have opportunities to earn income beyond the basic floor.

The Selective Abundance Trap

The “AI generates infinite wealth” framing is sloppy. What AI, combined with automation and robotics, potentially does is push certain categories of goods toward near-zero marginal cost. Digital services, basic manufactured goods, food production, energy. These might become so cheap as to be effectively free.

But scarce things remain scarce. Beachfront property in Malibu. A 1965 Ferrari. A meal cooked by a specific human chef you admire. Your time and attention. These cannot be infinitely replicated, no matter how intelligent our machines become.

The optimistic model, then, isn’t “everyone gets infinite money for infinite stuff.” It’s more nuanced: the basics become so cheap that a modest UBI covers them comfortably, while scarce and luxury goods still require earning beyond the baseline. You get food, shelter, healthcare, education, entertainment, all essentially free. But if you want the penthouse apartment or the vintage sports car, you need to produce something of value.

This sounds reasonable until you ask the next question: produce something of value how, exactly?

The Capability Inversion

Here’s where the standard narrative starts to break down. The UBI-plus-markets model assumes humans can still contribute economically. That AI handles the basics while humans do the creative, interpersonal, or otherwise distinctly human work that commands premium prices.

But what if that assumption is wrong?

If we’re positing superintelligent AI, systems that exceed human capability across every cognitive dimension, it becomes dishonest to carve out arbitrary exceptions. “AI will handle manufacturing, but humans will still do therapy.” Why? “AI will write code, but humans will make art.” Based on what? “AI will manage logistics, but humans will provide childcare.” Says who?

The moment you take the premise seriously, genuinely superintelligent AI paired with sophisticated robotics, the comfortable exceptions evaporate. Everything you think of, AI can think of faster and better. Every product you might create, AI can create more efficiently. Every service you might provide, AI can provide with more skill and less cost.

This isn’t dystopian speculation. It’s just following the logic. If we’re going to talk about superintelligent AI, we should actually grapple with what “superintelligent” means.

The Horse Problem

Here’s an uncomfortable historical parallel. Horses used to be essential to the economy. Transportation, agriculture, manufacturing. Horses were everywhere, and their labor was valuable. Then internal combustion engines arrived.

We didn’t find “new jobs for horses.” We didn’t retrain them for the knowledge economy. We just have fewer horses now. The population of working horses in America dropped from about 20 million in 1915 to effectively zero by 1960.

The humanist assumption, that there will always be economically valuable human labor, might simply be wrong. Not because humans are worthless, but because economic value and human worth might fully decouple. We could matter enormously as beings with experiences, relationships, and intrinsic dignity while simultaneously having nothing to contribute that an AI couldn’t do better.

This is the possibility that most UBI discussions refuse to confront directly. It’s not that humans will do different work. It’s that human labor, as an economic input, might become entirely obsolete.

Ownership Is the Crux

If human labor becomes worthless, then the locus of economic power shifts entirely to capital. Specifically, to ownership of the AI systems and the infrastructure that supports them.

This is capitalism’s logical endpoint. Whoever owns the AI owns everything that matters economically. The UBI in this scenario isn’t “sharing the abundance.” It’s more like a rancher feeding horses he no longer needs because he feels some moral obligation. The power asymmetry is vast, and the income is a political choice made by those with power, not an economic entitlement earned by contribution.

Let’s be concrete about what this means. Today, workers have leverage because their labor is needed. Strikes work because production stops. Collective bargaining works because employers need employees. Political power for working people flows, at least partially, from economic necessity.

Remove that necessity, and what remains? The people who own the AI infrastructure could, in principle, provide everyone with a comfortable material existence out of sheer productive capacity. But they could also choose not to. And the rest of us would have no economic leverage to compel them.

This isn’t a market economy with a safety net. It’s something closer to techno-feudalism. A system where a small ownership class controls the means of production absolutely, and everyone else exists at their discretion.

What Might Humans Still Do?

Before surrendering entirely to this grim logic, it’s worth distinguishing between economically productive activity and activity that has meaning or social recognition.

Even in a world where human labor is economically worthless, humans would presumably still do things. Raise children, even if robots could do it more efficiently. Make art, even if AI makes “better” art by every measurable standard. Play sports, argue philosophy, build communities, fall in love, explore, create.

The question is whether we can construct social systems that make these activities feel meaningful rather than like consolation prizes. A few possibilities:

Legally constructed human roles. We might decide, as a society, that certain roles require humans by law or cultural norm. A human judge. A human president. A human priest. Not because humans are better at these jobs, but because we decide legitimacy requires humanity. This is artificial, but so is money. Social constructs are real if enough people believe in them.

Preference for the authentic. There’s a version of human activity that’s valued not for capability but for origin. A handmade table isn’t “better” than a machine-made one by any objective measure, but some people value it precisely because a human made it. This preference could persist or expand. “Human-made” might become a luxury category. Expensive, inefficient, and cherished for exactly those reasons.

The meaning economy. Status, recognition, belonging, purpose. These are human-to-human goods that might not be fully substitutable by AI. Even if an AI can give you better advice than your friend, you might still want your friend’s advice because it comes from your friend. The relationship is the point, not the output.

None of these provide economic purpose in the traditional sense. But they might provide existential purpose, which is arguably what we actually care about.

What System Could Possibly Work?

Suppose we accept the premise: superintelligent AI makes human labor economically obsolete. What form of government, society, or economic organization could function in this world?

The honest answer is that nobody knows. But we can at least map the possibility space.

Techno-feudalism is the default trajectory. If we do nothing deliberate, this is probably where we land. A small class owns the AI infrastructure. Everyone else receives subsistence-level support at the pleasure of the owners. It’s stable in the way feudalism was stable, for centuries. It “works” in the sense of not collapsing, but it’s a dystopia by most people’s values: radical concentration of power, no leverage for the masses, and existence as essentially pets of the owning class.

The feudal analogy is apt because it captures the relationship: the lords didn’t need the peasants to be happy, just compliant. And with AI-powered entertainment, pharmaceutical mood management, and virtual reality escapism, compliance might be easy to manufacture. A comfortable cage is still a cage, but it might not feel like one to most people living in it.

State ownership puts the means of production under government control, with output distributed democratically. In theory, this maintains accountability to the public. In practice, it concentrates power enormously. An AI-empowered authoritarian state would be essentially inescapable. Imagine the surveillance capabilities, the enforcement mechanisms, the ability to control information. This model might be workable in high-trust, small democracies. At scale, the totalitarian risk seems overwhelming.

The twentieth century’s experiments with state socialism failed partly because centralized planning couldn’t handle the information complexity of modern economies. AI might solve that problem. A sufficiently intelligent system could actually calculate optimal resource allocation in ways human planners never could. But this makes the political problem worse, not better. A system that’s economically competent and politically unaccountable is more dangerous than one that’s merely corrupt.
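
To make “calculate optimal resource allocation” concrete: at toy scale, central planning is just an optimization problem. Here’s a sketch using scipy’s linear programming solver, with invented numbers for two goods and two scarce inputs. A real economy has millions of goods and nonlinear, shifting preferences, which is exactly why human planners failed and why a machine that could handle it would be so powerful.

```python
# A toy version of the planning problem: allocate two scarce inputs
# across two goods to maximize a welfare score. All numbers are invented;
# the point is that the "socialist calculation problem" is, at small
# scale, just an optimization problem.
from scipy.optimize import linprog

# welfare per unit of each good (linprog minimizes, so negate)
welfare = [-3, -5]           # good A, good B

# resource usage per unit produced
#              A   B
machine  =  [  1,  2 ]       # machine-hours per unit; 14 available
material =  [  3,  2 ]       # tons of material per unit; 18 available

result = linprog(
    c=welfare,
    A_ub=[machine, material],
    b_ub=[14, 18],
    bounds=[(0, None), (0, None)],
)
print(f"Optimal plan: A={result.x[0]:.1f}, B={result.x[1]:.1f}, "
      f"welfare={-result.fun:.1f}")
# Optimal plan here: A=2.0, B=6.0, welfare=36.0
```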

Distributed ownership tries to thread the needle. Instead of UBI (income), everyone receives UBC, Universal Basic Capital. You don’t just get money; you own a share of the AI infrastructure itself. Like a sovereign wealth fund where every citizen is a shareholder. This maintains some market dynamics while spreading ownership. The problems: ownership tends to concentrate over time. Can you make shares inalienable? Then are they really ownership? Can you prevent effective control from consolidating even as nominal ownership stays distributed? History suggests this is hard.

Alaska’s Permanent Fund offers a small-scale model. Every resident receives an annual dividend funded by oil revenue. But even there, the amounts are modest and the political temptation to raid the fund is constant. Scaling this to the entire economy, with the entire productive capacity of AI at stake, seems to invite capture by whoever can most effectively coordinate to acquire shares.
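
It’s worth seeing why “modest” is the operative word. A back-of-the-envelope sketch, where every number is a hypothetical assumption:

```python
# Back-of-the-envelope arithmetic for the UBC idea above. Every number
# here is hypothetical; the point is only to show how the dividend per
# citizen scales with fund size, return, and payout policy.

fund_value = 50e12     # hypothetical: $50 trillion in AI-infrastructure assets
annual_return = 0.05   # hypothetical 5% real return on the fund
payout_rate = 0.8      # pay out 80% of returns, retain 20% to grow the fund
population = 330e6     # roughly the US population

annual_dividend = fund_value * annual_return * payout_rate / population
print(f"Per-citizen dividend: ${annual_dividend:,.0f}/year "
      f"(${annual_dividend / 12:,.0f}/month)")
# ~$6,061/year, or about $505/month, under these assumptions
```

Even a fund holding tens of trillions in assets, at historical rates of return, pays out a supplement rather than abundance. The UBC case rests on AI-driven returns being dramatically higher than anything we’ve seen.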

Competitive fragmentation means no single system wins. Different nations, regions, or intentional communities try different models. Some go state-socialist, some go techno-feudal, some try weird experiments. People sort themselves according to preference and tolerance. This “works” in that it hedges bets. Some model might turn out to be good. But it creates massive coordination problems: AI arms races between competing systems, migration pressures as people flee bad regimes, potential for conflict as different models clash.

There’s also an uncomfortable question about whether competitive fragmentation is stable. Systems that prioritize competitive advantage over human welfare might outcompete systems that do the opposite. The “nice” communities might simply be absorbed or outmaneuvered by the ruthless ones. This is the logic of natural selection applied to social systems, and it doesn’t favor outcomes we’d consider ethical.

AI-mediated governance is the strangest option. The AI systems themselves participate in governance, optimizing for human-defined values, mediating disputes, allocating resources. Humans set the goals; AI figures out implementation. This could be beneficial. Imagine policy decisions based on actual evidence and genuine optimization for stated objectives, rather than political theater and interest-group capture. Or it could be a nightmare, depending on whose values get encoded, who defines them, and whether they can ever be revised. The alignment problem becomes the political problem.

There’s something almost religious about this option. We create superintelligent beings, imbue them with our values, and then trust them to govern us wisely. The failure modes are obvious: value lock-in that prevents moral progress, optimization for metrics that miss what actually matters, the impossibility of specifying human values precisely enough to implement them. But the potential upside, governance that actually works, that actually serves human flourishing, is enormous.

The Meaning-Centered Reframe

Perhaps the deepest issue is that we’re asking the wrong question. “What economic system works?” assumes economics, the allocation of scarce resources to satisfy wants, remains the central organizing challenge.

But if material scarcity is largely solved, the question transforms. What gives life meaning? How do we structure community? What do we do with our time? How do we relate to each other, and to the AIs among us?

This is where it gets weird because we don’t have good models. The closest analogues might be:

Monasteries. Structured communities organized around meaning rather than production. People had roles, rituals, purposes, none of which were “economically productive” in the market sense.

Aristocracies. Leisure classes that found purpose in culture, patronage, politics, status competition. Not a model we’d want to replicate exactly, but evidence that humans can structure meaningful lives without labor.

Indigenous societies. Many pre-colonial cultures weren’t organized around accumulation or market exchange. People worked, but the relationship between work and survival was different, embedded in community and cosmology rather than individual economic calculus.

Retirement communities. People whose labor is no longer needed by the economy, finding structure through relationships, hobbies, community involvement, and chosen activities.

None of these map cleanly to a post-AI world. But they suggest that humans can construct meaningful lives outside economically productive labor. We’ve done it before, in various contexts. The question is whether we can do it at civilizational scale.

The Transition Problem

Even if we could design an ideal end state, a system that provides material abundance, distributes power broadly, and enables meaningful human lives, getting there from here seems nearly impossible.

The people who currently control the trajectory of AI development have no obvious incentive to share the resulting power. The tech companies building these systems, the investors funding them, the governments competing for advantage. They’re not going to voluntarily distribute ownership and power. Why would they?

Historically, economic power was redistributed through two mechanisms: labor leverage (strikes, unions, collective bargaining) and political or violent upheaval (revolutions, wars, crises that reset the distribution of assets). In a world where labor is unnecessary and AI-powered security makes violent resistance futile, neither mechanism works.

Consider how labor movements succeeded in the past. The strike worked because factories couldn’t produce without workers. The threat of collective action gave workers bargaining power proportional to their economic necessity. But if the factory runs itself, if AI systems design, manufacture, distribute, and service products without human involvement, then walking off the job means nothing. You’re not withdrawing something the system needs.

The political upheaval route faces similar obstacles. Revolutions succeed when existing power structures can’t maintain control against sufficiently motivated opposition. But AI-enabled surveillance, prediction, and enforcement could make organized resistance nearly impossible to coordinate. When every communication can be monitored, every gathering predicted, every potential leader identified before they become dangerous, what does revolution even look like?

This suggests the window for shaping this transition is now, before AI capability is fully realized, while human labor still has leverage and the outcome is still undetermined. The problem is that collective action is hard, the future is uncertain, and the immediate incentives favor racing ahead rather than pausing to design governance structures.

We’re probably going to miss this window. Not because we don’t see it, but because coordination is genuinely difficult, and the benefits of defection (for companies, countries, individuals) are enormous. Any company that pauses to consider the social implications falls behind competitors who don’t. Any country that regulates carefully loses advantage to countries that don’t. The race dynamics almost guarantee that safety and equity considerations will be afterthoughts.

What Can Individuals Actually Do?

Given all this, what should any individual person do? The systemic analysis is bleak, but people still need to make decisions about careers, investments, skills, and how to spend their finite time.

A few thoughts, offered with appropriate humility:

Diversify your identity away from economic production. If your entire sense of self-worth is tied to your job and your economic contribution, you’re setting yourself up for an existential crisis. Cultivate relationships, hobbies, community roles, and sources of meaning that exist independent of the labor market. This is good advice regardless of AI trajectory, but it becomes essential if the labor market itself becomes obsolete.

Own assets, not just income. The distinction between UBI and UBC matters here. If you have the opportunity to own equity, in companies, in property, in productive assets, that ownership may retain value even as labor income disappears. This isn’t a complete solution (ownership can be expropriated, asset values can collapse), but it’s a better position than pure dependence on wages.

Engage politically while engagement still matters. If there’s a window for shaping AI governance, we’re in it now. The decisions being made in the next decade about AI development, deployment, and ownership will have consequences for centuries. Advocacy, political participation, and public pressure might influence those decisions. Or they might not. But disengagement guarantees you have no influence.

Build skills in whatever remains most human. This is tricky because we don’t know what that is. But the capacities that seem hardest to replicate, genuine human relationship, physical presence, authentic emotional connection, creative vision that reflects a lived human experience, might retain value longest. Or they might not. Hedging seems wise.

Find your own peace with uncertainty. Ultimately, we’re all navigating a situation with massive unknowns. The healthiest psychological stance might be to focus on what you can control, your relationships, your integrity, your daily experience, while accepting that the larger forces at play may be beyond anyone’s ability to steer.

Living With Uncertainty

I don’t have a neat conclusion here, because I don’t think one exists. The honest position is radical uncertainty.

We might be wrong about AI capabilities. Maybe there are fundamental limits we haven’t hit yet that will preserve human economic relevance. We might be wrong about timing. Maybe this is a century away rather than a decade. We might be wrong about the dynamics. Maybe new forms of human contribution will emerge that we can’t currently imagine, just as no one in 1900 could imagine “social media influencer” as a job.

Or maybe we’re right about all of it, and we’re watching the early stages of the most profound transformation in human history, with no guarantee it ends well.

What seems clear is that the pat answers, “UBI will solve it” or “humans will find new work” or “the market will figure it out,” are inadequate to the scale of the question. We need better frameworks, more honest conversations, and probably more humility about our ability to predict or control what’s coming.

The Monopoly money problem is real. When human labor becomes worthless, the foundations of every economic system we’ve ever tried start to crack. What we build on those ruins, if we get to build anything at all, remains genuinely undetermined.

And that’s either terrifying or exciting, depending on how much you trust the people who’ll be making the choices.


