A Nobel laureate economist explains how AI could bring back the age of Malthus
My bias — as long-time AEIdeas readers know — leans toward techno-optimism. While there are always trade-offs in life, I am generally excited about what scientists, technologists, and entrepreneurs will next bring to humanity. One of my blog posts yesterday, for instance, did a deep dive into new research suggesting AI and robotics won’t destroy the human job market.
But I am not going to ignore the other side of the trade. And for that perspective, I offer “Artificial Intelligence and Its Implications for Income Distribution and Unemployment” by Anton Korinek and Nobel laureate Joseph Stiglitz, in which the economists sketch out how AI agents — whether actual artificial lifeforms or AI-enhanced humans — could return humanity to a new age of Malthus, one in which “super‐intelligent entities are likely to command a growing share of the scarce resources in the economy, pushing regular humans below their subsistence level.” From the paper:
In the beginning, those lacking the skills that are useful in an AI‐dominated world may find that they are increasingly at a disadvantage in competing for scarce resources, and they will see their incomes decline, as we noted earlier. The proliferation of AI entities will at first put only modest price pressure on scarce resources, and most of the scarce factors are of relatively little interest to humans (such as silicon), so humanity as a whole will benefit from the high productivity of AI entities and from large gains from trade. From a human perspective, this will look like AI leading to significant productivity gains in our world. Moreover, any scarce factors that are valuable for the reproduction and improvement of AI, such as human labor skilled in programming, or intellectual property, would experience large gains.
As time goes on, the superior production and consumption technologies of AI entities imply that they will proliferate. Their ever‐increasing efficiency units will lead to fierce competition over any non‐reproducible factors that are in limited supply, such as land and energy, pushing up the prices of such factors and making them increasingly unaffordable for regular humans, given their limited factor income. It is not hard to imagine an outcome where the AI entities, living for themselves, absorb (i.e. “consume”) more and more of our resources.
Eventually, this may force humans to cut back on their consumption to the point where their real income is so low that they decline in numbers. Technologists have described several dystopian ways in which humans could survive for some time — ranging from uploading themselves into a simulated (and more energy‐efficient) world to taking drugs that reduce their energy intake. The decline of humanity may not play out in the traditional way described by Malthus — that humans are literally starving — since human fertility is increasingly a matter of choice rather than nutrition. It is sufficient that a growing number of unenhanced humans decide that given the prices they face, they cannot afford sufficient offspring to meet the human replacement rate while providing their offspring with the space, education, and prospects that they aspire to.
One question that these observations bring up is whether it might be desirable for humanity to slow down or halt progress in AI beyond a certain point. However, even if such a move were desirable, it may well be technologically infeasible — progress may have to be stopped well short of the point where general artificial intelligence could occur. Furthermore, it cannot be ruled out that a graduate student working under the radar in a garage will create the world’s first super‐human AI.
If progress in AI cannot be halted, our description above suggests mechanisms that may ensure that humans can afford a separate living space and remain viable: because humans start out owning some of the factors that are in limited supply, if they are prohibited from transferring these factors, they could continue to consume them without suffering from their price appreciation.
This would create a type of human “reservation” in an AI‐dominated world. Humans would likely be tempted to sell their initial factor holdings, for two reasons: First, humans may be less patient than artificially intelligent entities. Second, super‐intelligent AI entities may earn higher returns on factors and thus be willing to pay more for them than other humans. That is why, for the future of humanity, it may be necessary to limit the ability of humans to sell their factor allocations to AI entities. Furthermore, for factors such as energy that correspond to a flow that is used up in consumption, it would be necessary to allocate permanent usage rights to humans. Alternatively, we could provide an equivalent flow income to humans that is adjusted regularly to keep pace with factor prices.
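The price-squeeze dynamic the authors describe — proliferating AI entities bidding up a fixed, non-reproducible factor until regular humans can no longer afford subsistence — can be illustrated with a toy simulation. To be clear, every number and functional form below is an illustrative assumption of mine, not a figure or model from the Korinek–Stiglitz paper:

```python
# Toy Malthusian dynamic: AI demand for a fixed factor (say, land or
# energy) grows geometrically, driving up its price and shrinking what
# humans can afford out of a constant factor income.
# All parameters are illustrative assumptions, not taken from the paper.

FIXED_SUPPLY = 1000.0   # non-reproducible factor, in arbitrary units
HUMAN_INCOME = 100.0    # humans' factor income, assumed constant
SUBSISTENCE = 1.0       # factor units humans need to remain viable

def simulate(periods=20, ai_demand=50.0, ai_growth=1.5):
    """Each period, AI-sector demand grows geometrically; the factor
    price rises with total demand against the fixed supply."""
    history = []
    for t in range(periods):
        total_demand = ai_demand + HUMAN_INCOME  # crude demand proxy
        price = total_demand / FIXED_SUPPLY      # market-clearing proxy
        human_units = HUMAN_INCOME / price       # what humans can afford
        history.append((t, price, human_units))
        ai_demand *= ai_growth                   # AI entities proliferate
    return history

for t, price, human_units in simulate():
    status = "viable" if human_units >= SUBSISTENCE else "below subsistence"
    print(f"t={t}: price={price:.2f}, human consumption={human_units:.2f} ({status})")
```

Early on, humans are barely affected — exactly the paper’s point that AI at first looks like a pure productivity gain — but once AI demand dominates, the price of the fixed factor explodes and human consumption falls below the subsistence line. The “reservation” remedy amounts to exempting some slice of `FIXED_SUPPLY` from this market entirely.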
Certainly fascinating speculation. And now we know Bill Gates isn’t the only person out there musing about the possible need for robot taxes. But I think the analysis suffers from a lack of imagination. For instance, if we are talking about super-intelligence, aren’t we also talking about a time of extraordinary abundance that could be shared with all? A Malthusian world is one of scarcity. Would there really, for instance, be a class of humans without the means to enhance themselves in such a world?
The same goes for some of the constraints the authors refer to, such as energy. Might not “AI entities” devise new ways to power the world? Indeed, such a point is made in a footnote: “At present, humans consume only a small fraction — about 0.1% — of the energy that earth receives from the sun. By harvesting energy from beyond earth, even greater supplies of energy would be available.” And that’s just solar power. If you are talking about super-intelligence, then why not also theoretical power sources such as zero-point energy?