For decades, Nick Bostrom, philosopher, AI theorist and former director of the Future of Humanity Institute at Oxford University, has been warning the world that superintelligence could be humanity’s final invention. Not because it is the end goal, but because it might be the end, period.
In his latest book, Deep Utopia: Life and Meaning in a Solved World, which comes a decade after his pathbreaking Superintelligence (2014), Bostrom brings what he calls fretful optimism, along with philosophical clarity, to questions that are no longer merely theoretical.
In Deep Utopia, Bostrom explores what life might look like in a future where superintelligent AI reverses ageing, creates boundless wealth and makes human labour and scarcity obsolete. If everything is taken care of, he asks, what gives our lives meaning? In a conversation with Kanika Saxena over Zoom and email, Bostrom outlines the stakes at this critical moment in AI’s evolution. Edited excerpts:
You have spent years warning that superintelligence could be humanity’s last invention. How nervous are you about where we are today?
Nervous but also excited. I might describe myself as a ‘fretful optimist’, or alternatively as a ‘moderate fatalist’ on this topic. There are many ways we could screw this up, each one quite plausible. But in my opinion, it would also be a tragedy, maybe an even worse one, if we failed to ever unlock this next level.
Where do you think the first superintelligent AI will come from—a big tech lab with deep pockets, or some teenager’s laptop? Meta has pledged $15 billion towards building ‘superintelligence’. Is that enough to create something transformative?
It will come from a big tech lab, at least one with access to billions of dollars’ worth of compute, although the number of engineers might not need to be that big. Moves like Meta’s give them another shot. If they hadn’t done something drastic like this, they would (at best) have remained in the second division.
So many AI models are described as ‘intelligent’ but often they are just really good at predicting the next word. Are we a bit too dazzled by clever autocomplete?
Much of what we humans do also seems to be clever autocomplete! You might need a large foundation of pattern recognition and interpolation as a cognitive basis upon which you might then do a bit of creative reasoning and insightful work. We will probably begin to see more of the latter in AIs as well, as the predominantly supervised learning regime of the early LLMs (which basically consists of learning a compressed representation of the entire body of text on the internet) is supplemented with longer-horizon reinforcement learning, reasoning, lifelong learning during inference time, and other methods.
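For readers curious what ‘predicting the next word’ looks like in practice, here is a minimal illustrative sketch (not from the interview, and not any lab’s frontier system): it greedily extends a prompt one token at a time using the small, openly available GPT-2 model via Hugging Face’s transformers library. The choice of model, prompt and greedy decoding are arbitrary assumptions made for illustration.

```python
# Minimal next-token "autocomplete" loop. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # GPT-2 chosen only for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Superintelligence could be humanity's"        # arbitrary example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                                  # extend the prompt by ten tokens
        logits = model(input_ids).logits                 # a score for every vocabulary token
        next_id = torch.argmax(logits[0, -1])            # greedily pick the most probable next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The entire loop is just repeated next-token prediction; everything that reads as fluency emerges from the statistics the model absorbed during training, which is the sense in which the interviewer’s ‘clever autocomplete’ framing is apt.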
You have argued that AI could outsmart us in ways we can’t predict. What is the scenario that keeps you up at night—some clever system quietly gaming its objectives, or something more sudden and obvious?
By the time it’s obvious, it might be too late. But misaligned superintelligence is only one of several things I worry about. Another is the possibility that we humans will use superintelligence (even if we figure out how to steer it) to harm or oppress one another. Yet another is that we will not treat the digital minds that we create as well as we should. In the future, most sentient minds or minds with moral status will be digital, so it is ethically important that the world is a happy place for them as well as for biological humans and animals. And, finally, there’s also the question of how the superintelligent beings we create will get along with other superintelligent beings: this is paramount but is currently less understood.
Some people say that throwing billions at AI, without much thought for guardrails, accelerates progress. Is the money flowing faster than wisdom can catch up?
Wisdom is slow, quiet, hard to measure and not good at selling itself. Yet, I think, we stand in great and increasing need of it, especially as we begin to approach the intelligence explosion. I have been working to contribute to this for the past three decades. For the first 20 years, it was regarded as totally fringe outside a tiny circle of collaborators. In the past 10 years, there has been a huge swell in the number of brilliant minds finally paying attention to it. The leading AI labs have research teams specifically trying to solve superintelligence alignment. Anthropic even has a person entirely focused on the welfare of digital minds (may there be many more!). Governments, for better or worse, have begun to come out with statements and policies in the AI space. So, I would say that wisdom in the AI field has been growing remarkably fast in recent years. However, the wisdom deficit might still have widened, given how fast technical capabilities have been advancing over the same period.
Fast forward 20 years: are we living in a sci-fi utopia, a dystopian cautionary tale, or a world that looks suspiciously like 2025—just with better chatbots?
I would be surprised if it was the last. Both utopia and dystopia seem more plausible. In the case of dystopia, it wouldn’t be a cautionary tale because it would be too late to learn from it. In the case of utopia, it could be extremely wonderful, but it would involve a profound transformation in what it means to be human. I explore the prospect of deep utopia in my recent book, which is unfortunately a pretty difficult read.
What excites you most about the future of AI, and what makes you want to pull the emergency brake?
We should mostly focus on steering rather than braking. A big part of what excites me is the removal of negatives—the massive, ongoing, terrible, unwanted suffering all around the world, and also disease and ageing. But there is also the possibility of new positives: blissful modes of being, with enhanced capacities and ways of enjoying everyday life far beyond what we can currently even dream of. I am hoping people will one day look back upon the present era and shake their heads in shock and disbelief that anybody could live as we do now—even the most fortunate among us.
kanika.saxena1@timesofindia.com