Is A.I. Going to Make Us Rich Or Kill Us All? Or Both?
A guide to thinking about technologies with existential risks
If you listen to commentary on artificial intelligence, you tend to get one of two stories. The first is that A.I. is going to unlock an incredible boom in productivity. Humans, using the tools of A.I., will be substantially more productive. In addition, some have argued that at a certain point A.I. itself will start to produce new ideas on its own and this could lead to rapid increases in economic growth.
The second narrative you hear about A.I. is some sort of doomsday scenario. For example, the rapid progress of A.I. will lead to an intelligence that equals or surpasses human intelligence. People therefore imagine scenarios in which humans create robots that can subsequently produce other robots without any human input, and those robots eventually turn against us. These types of arguments seem a lot like science fiction. However, even setting aside sci-fi-type arguments, people are justly concerned about what bad actors might do with this technology.
To the casual observer, these might seem like two alternative outcomes. If so, one could simply assign probabilities to each outcome and determine how to proceed with policy and other actions. In reality, however, the probability of one outcome might be correlated with the probability of the other. For example, it is not unreasonable to think that an A.I. sufficient to produce large productivity gains would also increase the existential risk associated with the technology.
If investments in A.I. increase the likelihood of both higher productivity growth and existential risk, one might think of investing in A.I. as an optimal stopping problem. In that case, one could think of investments in A.I. as part of a financial portfolio in which the owner of the portfolio generates income from investments in A.I., but also owns an option that, if exercised, would stop A.I. development (and also the growth in income associated with further investment). Maximizing the value of that portfolio would be akin to choosing the probability of existential risk at which to stop development. That maximization would therefore automatically balance the trade-off between prosperity and existential risk.
That framing of the problem, however, necessarily assumes away strategic elements. Development of A.I. is not a top-down process. Anyone who understands the technology can actively work on developing it. People who place different subjective valuations on the technology, or who have different degrees of risk aversion with respect to existential risks, will choose different stopping times. But if one knows that others might choose a different stopping time, that might affect one’s own stopping decision. In fact, in this context one definition of a “bad actor” might be someone committed to never stopping regardless of the existential risk.
Of course, the probability of rapid productivity growth and the probability of existential risk also depend on deliberate economic decisions. These decisions need not be binary (invest/stop), as in the case of an optimal stopping problem. One might use resources to invest in A.I. while also using resources to mitigate existential risks. After all, it seems reasonable to assume that the probability of A.I. posing an existential risk to society is affected not only by the advancement of the technology, but also by other things that people are doing. Furthermore, since these are economic decisions, they involve costs. When A.I. technology is rather primitive, or society is not as rich, abstaining from investing in A.I. is potentially very costly in the sense that one is giving up substantial potential benefits. In addition, the cost of mitigation likely declines (as a percentage of income or spending) the richer that society becomes.
There is a tendency at the current time to focus on what people perceive to be existential risks, not only with regard to A.I., but also with regard to things like climate change. Thinking about existential risk is nothing new, however. Nuclear weapons, and the existential risks they created, preoccupied social scientists for some time. Nonetheless, we might put discussions of existential risk into a broader framing centered on safety.
Economists have thought a lot about safety and safety regulation. Like anything else, the demand curve for safety is downward-sloping. Nonetheless, when it comes to issues like A.I., it is likely most important to consider the effect of income on the demand for safety. Is safety a normal or an inferior good? In other words, does the demand for safety go up or down as societies become wealthier? Casual evidence seems to suggest the former. Intuitively, this is easy to understand. As societies get richer, they care more about preserving what they have than growing what they have.
There is also a question of whether safety is a luxury good. In other words, does the demand for safety rise by a larger percentage than income itself? Again, it seems intuitive that it might. As countries become richer, they are likely to be substantially more concerned about safety than when they were poor.
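To put those two questions in standard terms, the distinction comes down to the income elasticity of the demand for safety. This is just the textbook definition, not anything specific to the papers discussed below:

```latex
% S = quantity of safety demanded, Y = income
\eta_{S,Y} \;=\; \frac{\%\,\Delta S}{\%\,\Delta Y} \;=\; \frac{\partial \ln S}{\partial \ln Y},
\qquad
\begin{cases}
\eta_{S,Y} > 0 & \text{safety is a normal good} \\[2pt]
\eta_{S,Y} > 1 & \text{safety is a luxury good}
\end{cases}
```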
Of course, thinking about mitigation policy is important as well. Oftentimes, casual discussion simply assumes that policy “works.” In reality, policy creates incentives, which might or might not help with mitigation.
Thus, with the stage set, I would like to proceed first by discussing how to think about A.I. and economic growth. Given that discussion, I would then like to discuss a couple of recent papers (one by Chad Jones and the other by Leopold Aschenbrenner and Philip Trammell) that address the possibilities of rapid economic growth and existential risks imposed by A.I.
A.I. and Growth: What Is the Mechanism?
As I have already stated, people who are optimistic about A.I. seem convinced that it is going to have significant effects on economic growth. Thus, before getting into the tradeoffs associated with A.I. development, I think it is worthwhile to discuss why they might have reason for optimism.
The story of economic growth is really the story of figuring out how to produce more stuff with the same amount of resources. The term economic growth is therefore really just synonymous with the growth of total factor productivity. Of course, that tells us what growth is; it does not tell us what creates growth. To understand where increases in total factor productivity come from, endogenous growth theorists have focused on the market for ideas, since it is really the production of new ideas (large and small) that allows people to produce more with the same amount of resources and therefore generates higher total factor productivity.
Thinking about economic growth in terms of ideas is a useful framing that serves as a jumping off point for thinking about a variety of factors that might influence economic growth (e.g., policy, institutions, human capital). It also produces an important insight. Malthusian logic, often observed in the press or among non-economists, tells us that population growth leads to more mouths to feed. The typical Malthusian story is that economic growth makes people more prosperous and they can therefore afford to have more children. However, with diminishing marginal returns, the increased demand for economic output will rise faster than production of that output. This will put a limit on growth.
By contrast, thinking about economic growth in terms of ideas acknowledges that each new person is a new mouth to feed, but it also points out that each new person is an additional brain that is full of new ideas. The production of these new ideas can therefore lead to changes in total factor productivity that are significant enough to offset any of these Malthusian mechanisms. In such a world, not only does population growth not restrict economic growth, but the rate of economic growth is actually increasing in the rate of population growth! In fact, this is one reason why some growth theorists have raised concerns about declining fertility rates.
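To see why, consider a stylized idea production function of the sort used in the semi-endogenous growth literature. This is a generic textbook formulation offered only as a sketch; it is not the specific model in either paper discussed below:

```latex
% A_t = stock of ideas, L_t = number of human researchers
\dot{A}_t \;=\; \alpha\, A_t^{\phi}\, L_t^{\lambda}, \qquad \phi < 1
% Along a balanced growth path with population growth n, idea growth is
g_A \;=\; \frac{\lambda\, n}{1-\phi}
```

Faster population growth raises the growth rate of ideas, and hence of total factor productivity. If A.I. supplies ideas directly, idea growth is no longer tied to the number of human researchers.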
Of course, this argument also implies that the production of new ideas is constrained by the number of human brains. Thus, if A.I. becomes sufficiently advanced that it is coming up with new ideas of its own or solving problems that humans haven’t previously devised solutions for, this constraint disappears and growth could increase much more rapidly than would be true with human limitations.
One can frame the potential effects of A.I. either in terms of the level or the growth rate of total factor productivity. If the use of the technology largely just enhances the productivity of humans, then society is likely to observe an effect in which the level of total factor productivity is permanently higher. This means that society would experience a temporarily higher rate of economic growth than usual until the new higher level of productivity was achieved.
On the other hand, if A.I. is producing ideas or solutions that humans wouldn’t otherwise find, then there would be a growth effect in the sense that the growth rate of the economy would be permanently higher. Since I think existential risk is more likely with a growth effect, and since that has been the focus of the small but growing literature, I’m going to focus on that possible outcome in the discussion of existential risk.
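As a quick numerical illustration of the difference between the two cases (the numbers are invented for illustration; they are not estimates from any of the papers discussed here):

```python
# Illustrative comparison of a "level effect" vs. a "growth effect" on TFP.
# All numbers are invented for illustration, not estimates from the papers.

years = 30
no_ai = [(1 + 0.02) ** (t + 1) for t in range(years)]   # 2% trend growth, no A.I.

# Level effect: A.I. adds 2 extra percentage points of growth for 10 years,
# permanently raising the *level* of TFP, after which growth returns to trend.
level_path, tfp = [], 1.0
for t in range(years):
    tfp *= 1 + 0.02 + (0.02 if t < 10 else 0.0)
    level_path.append(tfp)

# Growth effect: A.I. permanently raises the *growth rate* from 2% to 4%.
growth_path = [(1 + 0.04) ** (t + 1) for t in range(years)]

for t in (9, 19, 29):
    print(f"year {t + 1:2d}: no A.I. = {no_ai[t]:.2f}, "
          f"level effect = {level_path[t]:.2f}, growth effect = {growth_path[t]:.2f}")
```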
A.I. and Existential Risk
To think about existential risk, let’s first start with the optimal stopping example. In a recent paper, Chad Jones captures the essential trade-off. In the basic framework, he assumes that consumption grows at some constant rate over time. In addition, the probability that humanity survives in a world with A.I. is decreasing over time. Expected utility is therefore the product of the probability that the world survives and the utility generated by consumption. Since he assumes that both of these paths are deterministic, the optimal stopping problem is simply to choose the time period in which one should stop this process.
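In symbols, a stylized rendering of that setup might look like the following. This is my own simplified notation, not the exact formulation in the paper:

```latex
% Consumption grows at rate g; survival declines at a constant hazard delta:
c_t = c_0 e^{g t}, \qquad S(t) = e^{-\delta t}
% Choose the stopping time T to maximize expected utility from the A.I. era
% plus the survival-weighted value W(T) of continuing without A.I. risk:
\max_{T}\; \int_0^{T} S(t)\, u(c_t)\, dt \;+\; S(T)\, W(T)
```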
The intuition of his main result is easy to understand. Society should let A.I. continue to advance as long as the extra growth exceeds the expected value of lost lives. All else equal, the higher the growth rate of the economy with A.I., the longer that society should continue its use. At the same time, the higher the chance that A.I. poses an existential risk to society, the sooner society should put an end to its development.
In fact, his simple model produces the following rule of thumb: if the ratio of the growth rate of the economy with A.I. to the probability that the world ends is greater than the ratio of the value of a statistical life to annual consumption, then society should continue to use A.I. Otherwise, A.I. should be stopped. The reason this is a useful rule of thumb is that it gives us an idea of how to evaluate the benefits relative to the costs. For example, we know from recent estimates that the current ratio of the value of a statistical life to annual consumption is about 6. Thus, in order to advocate the continued use of A.I., one would have to argue that the growth rate of the economy with A.I. would be at least 6 times greater than the probability that the world ends.
For example, over the last 150 years, the growth rate of real GDP per capita in the U.S. has been about 2 percent. Suppose that you think that A.I. would double that rate of economic growth to 4 percent. Jones’s rule of thumb would imply that as long as one believes that the probability that A.I. will result in the end of the world is less than 0.67 percent, society should be willing to continue to use A.I.
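Using the notation from the sketch above, the rule of thumb and the arithmetic behind the 0.67 percent figure can be written as follows (again, a stylized rendering rather than the paper’s exact statement):

```latex
% Continue using A.I. as long as growth per unit of risk exceeds the
% value of a statistical life relative to annual consumption, v = VSL / c:
\frac{g}{\delta} \;>\; v
% With v \approx 6 and g = 0.04 (growth doubling from 2% to 4%),
% the cutoff hazard is
\delta^{*} \;=\; \frac{g}{v} \;=\; \frac{0.04}{6} \;\approx\; 0.0067 \;=\; 0.67\% \text{ per year}
```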
Alternatively, one could use one’s beliefs about the growth rate of the economy in a world with A.I., along with one’s estimate of the probability that A.I. will end the world, to derive a length of time during which society should allow the use of A.I. Jones provides the following example. Suppose that the growth rate of the economy with A.I. would be 10 percent and the probability that A.I. destroys the world is 1 percent. This implies that society should continue to allow the use of A.I. until the ratio of the value of a statistical life to annual consumption rises from its current value of 6 to 10. Jones shows that if one assumes that consumers have logarithmic utility over consumption, this would imply that society should allow A.I. for approximately 40 years. Of course, if the probability that A.I. destroys the world is 2 percent, then even if it would produce a 10 percent growth rate in real GDP per capita, society should take steps now to stop it.
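A back-of-the-envelope way to see where the 40 years comes from, under the log-utility assumption (this is my own reconstruction, so treat the details as approximate):

```latex
% With log utility, u(c) = \bar{u} + \ln c, the value-of-life ratio is
v(c) \;\equiv\; \frac{u(c)}{c\, u'(c)} \;=\; \bar{u} + \ln c
% which rises one-for-one with ln c. Stopping becomes optimal when v reaches
% g / \delta = 0.10 / 0.01 = 10. Starting from v = 6, that requires
% \Delta \ln c = 4, and at 10 percent consumption growth this takes roughly
t \;=\; \frac{4}{0.10} \;=\; 40 \text{ years}
```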
This paper is a useful starting point for thinking about the relevant tradeoffs and issues involved in making decisions about A.I. policy and the benefits of A.I. relative to the possibly catastrophic costs. Nonetheless, it neglects a number of key factors that we might also care about, such as how economic decision-making might affect the probability of catastrophe or how people in other societies might make decisions about A.I. development. (By the way, this is not a critique of the paper. The paper is an attempt to take a simple model and elucidate some of the key issues and tradeoffs, not to be the definitive word on the topic.)
A paper by Leopold Aschenbrenner and Philip Trammell uses a similar objective function for thinking about the problem, but allows economic decision-making to influence the probability of catastrophe. In fact, they assume that the probability of catastrophe is increasing in the level of productivity (thus, all else equal, more A.I. development means a higher probability of catastrophe) but can be reduced through mitigation policies. Unlike in Jones’s model, mitigation policies are a continuum rather than a binary choice of “let it run” or “stop it.” In Jones’s model, as the economy continues to grow, the expected loss from catastrophe also grows because the opportunity cost of catastrophe gets larger the richer society becomes. This is why there is a finite amount of time in Jones’s model until the technology must be stopped. The intuition is that the technology makes society wealthier. At a certain point, society is wealthy enough that it no longer makes sense to bear the existential risk of A.I. because the marginal benefit of additional wealth is too small.
By similar logic, in Aschenbrenner and Trammell’s model, there is a period in which the optimal policy is to “let it run.” However, just like in Jones’s model, as the technology advances, society gets richer and the continuation value of society therefore grows over time. Since they allow mitigation to be a continuum, the end of this period is no longer a stopping time. Instead, when society gets sufficiently wealthy, it becomes optimal to engage in mitigation to limit existential risk.
What this means is that their model produces what they call an existential risk Kuznets curve, in which existential risk rises at lower levels of wealth. However, wealth eventually reaches a point beyond which the existential risk starts to decline. The reason for this is what I just described. For a sufficiently low initial value of wealth, there is an initial period when the optimal policy is no mitigation. Without mitigation, the existential risk is rising over time. But when society gets sufficiently wealthy (the continuation value of civilization gets sufficiently large), it becomes optimal to engage in mitigation. Once this occurs, the existential risk starts to decline.
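Here is a toy simulation of that mechanism. It is entirely stylized: the functional forms, the mitigation rule, and the parameters are my own inventions, not the ones in Aschenbrenner and Trammell’s paper:

```python
# Toy illustration of an "existential risk Kuznets curve":
# the hazard rate rises with wealth while no mitigation is undertaken,
# then declines once society is rich enough for mitigation to be worthwhile.
# All functional forms and parameters are invented for illustration only.

growth = 0.10              # wealth growth while the technology runs
trigger = 10.0             # wealth level at which mitigation spending begins
base_hazard = 0.001        # hazard per unit of wealth absent mitigation

wealth = 1.0
for year in range(51):
    # Mitigation spending is zero when poor and scales with wealth above the trigger.
    mitigation = max(0.0, 0.2 * (wealth - trigger))
    # Hazard rises with wealth (more capable technology) but is damped by mitigation.
    hazard = base_hazard * wealth / (1.0 + mitigation) ** 2
    if year % 5 == 0:
        print(f"year {year:2d}: wealth = {wealth:7.2f}, "
              f"mitigation = {mitigation:6.2f}, hazard = {hazard:.5f}")
    wealth *= 1 + growth
```

In this sketch the hazard rises roughly in proportion to wealth for the first couple of decades, peaks around the point where mitigation spending begins, and then declines as mitigation scales up faster than the underlying risk.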
In articulating this point, they reveal a result that some might find particularly counterintuitive: accelerating the development of A.I. might reduce the risk that a catastrophe ever occurs, despite the fact that it increases the likelihood of catastrophe in the short run. In fact, one might argue from their model (in a way that some might find paradoxical) that accelerating the development of the technology could help reduce existential risk faster by shortening the interval of time during which the existential risk is rising. In other words, society could experience sufficiently high growth over such a short interval of time that people become willing to begin mitigation efforts in the near future that today do not seem worth the cost. This effect is driven by the wealth effect of the new technology over that short period of time.
In other words, this is just a basic implication of price theory. One can think of safety as a luxury good. A significant wealth effect might therefore be necessary in order to increase the demand for safety sufficiently to mitigate the existential risk associated with a new technology.
Some Final Thoughts
This is a long post and there are a lot of issues to think about. Economists are just starting to take A.I. and its potential existential risks seriously. The papers I have highlighted here are great in the sense that they really help to elucidate a lot of the key issues.
In addition, they use relatively straightforward applications of price theory to draw out the relevant issues and tradeoffs associated with a new, exciting (and possibly dangerous) technology. A broader focus on the issue of safety helps to frame the discussion and make sense of the implications of their models.
One issue that deserves more attention is determining what exactly is meant by mitigation policy. The models in those papers simply assume that mitigation is effective. Jones assumes that A.I. can be stopped. Aschenbrenner and Trammell assume that mitigation simply “works.” In reality, all that mitigation policy can do is alter incentives. It becomes an open question as to whether or not the policy aligns incentives with the desired outcome.
For example, when it comes to safety, the obligatory reference here is the Peltzman effect. The famous example in car safety is the requirement that cars include seatbelts, or laws that require people to wear seatbelts. It is true that, all else equal, wearing a seatbelt will reduce the risk of serious injury. However, knowing that the risk of serious injury is lower, people might change the way they drive and become riskier drivers. It is important to consider what Peltzman effects might be lurking in the background of A.I. mitigation policy.
It is also important to consider counterfactuals. Several years ago, Alex Tabarrok highlighted an example from India. Several inexpensive cars in India received zero-star safety ratings, which resulted in calls to stop the manufacture of these cars. However, as Tabarrok pointed out, these cars were inexpensive alternatives to motorcycles. While those cars might be substantially less safe than other cars, they were safer than motorcycles (the post includes a photo of what looks to be a rather unsafe motorcycle ride!). If riding a motorcycle is the relevant counterfactual, then even these cars with bad safety ratings are better than the alternative!
This point seems especially important with regard to A.I. mitigation policy. Such policies are likely to be enacted at the national level. As such, whether these policies are successful likely depends on what other countries decide to do. This point has been recognized, and some have suggested treaties to govern A.I. development. Of course, a treaty is only worth the paper it is printed on. Each country that puts its signature on the document will find itself in a classic prisoner’s dilemma in which it has an incentive to be the lone signatory to cheat and achieve rapid economic growth relative to the rest of the world. And it is silly to trust bad actors to honor the agreement. Only if all signatories truly share a common concern for existential risk will such agreements mean anything — and, even so, those concerns might be overwhelmed by the temptation of economic incentives.
Let’s hope that this is the start of a useful and productive conversation about what is to be done, informed by price theoretic insights.