
Review of If Anyone Builds It, Everyone Dies

Published on February 19, 2026 1:53 AM GMT

Crosspost of my blog article.

Over the past five years, we’ve seen extraordinary advances in AI capabilities, with LLMs going from producing nonsensical text in 2021 to serving as people’s therapists and automating complex tasks in 2025. Given such progress, it’s only natural to wonder what further advances in AI could mean for society. If this technology’s intelligence continues to scale at its current rate, it seems more likely than not that we’ll see the creation of the first truly godlike technology: one capable of predicting the future like an oracle and of ushering in an industrial revolution unlike any we’ve seen before. Such a technology could bring about everlasting prosperity for mankind, or it could enable a small set of the rich and powerful to exercise absolute control over humanity’s future. Worse still, if we were unable to align such a technology with our values, it could pursue goals different from our own and kill us in the process of achieving them.

And yet, despite the possibility of this technology radically transforming the world, most discourse around AI is surprisingly shallow. Most pundits talk about the risk of job losses from AI, or the latest controversy surrounding an AI company’s CEO, rather than what this technology would mean for humanity if we were truly able to advance it.

This is why, when I heard that Eliezer Yudkowsky and Nate Soares’ book If Anyone Builds It, Everyone Dies was coming out, I was really excited. Given that Yudkowsky is the founder of the field of AI safety and has been working in it for over twenty years, I expected that he’d be able to write a foundational text for the public’s discourse on AI safety. I thought that, given the excitement of the moment and the strength of Yudkowsky’s arguments, this book could create a major shift in the Overton window. I even thought that, given Yudkowsky and Soares’ experience, the book would describe in great detail how modern AI systems work, why advanced versions of these systems could pose a risk to humanity, and why current attempts at AI safety are likely to fail. I was wrong.

Instead of reading a foundational text on AI safety, I read a poorly written, vague book with a completely abstract argument about how smarter-than-human intelligence could kill us all. If I had to explain every reason I thought this was a bad book, we’d be here all day, so instead I’ll just offer three criticisms:

1. The Book Doesn’t Argue Its Thesis

In the introduction to the book, the authors bold an entire paragraph to mark out their thesis: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything like the present understanding of AI, then everyone, everywhere on Earth, will die.”

Given such a thesis, you would expect that the authors would do the following:

  1. Explain how modern AI systems work
  2. Explain how scaled up versions of modern AI systems could pose an existential risk
  3. Offer examples of current flaws with AI systems that give us good reason to think that scaled up versions would be threatening to humanity
  4. Explain why current approaches to AI safety are deeply flawed
  5. Explain how an AI system could actually kill everyone

Instead, the authors do the following:

  1. Give an extremely brief description of how current AI systems work
  2. Make a vague argument that AI systems will develop preferences that are misaligned with human values
  3. Argue that, in order to satisfy these preferences, AI systems will want to kill everyone
  4. Argue that AI systems that have these preferences (and that are orders of magnitude better than humans across all domains) would kill everyone
  5. Explain how an AI system could kill everyone
  6. Make vague criticisms of modern AI safety without discussing any serious work in the field

Considering what the authors actually wrote, their thesis should have been, “If an artificial intelligence system is ever made that is orders of magnitude better than humans across all domains, it will have preferences that are seriously misaligned with human values, which will cause it to kill everyone. Also, for vague reasons, the modern field of AI safety won’t be able to handle this problem.”

Notably, this thesis is both much weaker than and quite different from the thesis they actually chose.

2. The Book Doesn’t Make A Good Foundation For A Movement

Considering that the authors are trying to get 100,000 people to rally in Washington DC to call for “an international treaty to ban the development of Artificial Superintelligence,” it’s shocking how little effort they put into explaining how AI systems actually work, what people are currently doing to make them safe, or even addressing basic counterarguments to their thesis.

If you asked someone what they learned about AI from this book, they would tell you that AIs are made of trillions of parameters, that AIs are black boxes, and that AIs are “grown not crafted.” If you pressed them about how AIs are actually created or how that specific creation process could cause AIs to be misaligned, they wouldn’t be able to tell you much.
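To make concrete what “grown not crafted” is pointing at, here is a minimal toy sketch (my own illustration, not the book’s) of the kind of training loop that produces modern models: nobody writes the behaviour by hand; gradient descent nudges parameters to reduce a loss on data, and the resulting numbers aren’t individually interpretable.

```python
# A toy "grown, not crafted" example: gradient descent fits parameters to data.
# Real LLMs do next-token prediction over trillions of parameters, but the loop
# has the same basic shape.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs X and noisy targets y generated from hidden weights.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)

w = np.zeros(8)   # the "grown" parameters start as meaningless numbers
lr = 0.05         # learning rate

for step in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                          # tweak parameters to reduce the loss

# The final weights do the job, but nothing in the loop explains why any
# individual parameter holds the value it does; that is the "black box" point.
print(np.round(w - true_w, 3))
```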

And despite the book being over 250 pages long, the authors barely discuss what others in the field of AI safety are trying to do. For instance, after devoting an entire chapter to examples of CEOs not really taking AI safety seriously, they share only one example of how people are trying to make AI systems safe.

Lastly, the authors are so convinced that their argument is true that they barely attempt to address any counterarguments to it such as:

  1. Current AI systems seem pretty aligned. Why should we expect this alignment to go away as AI systems become more advanced?
  2. Current AI systems rely heavily on reinforcement learning from human feedback, which seems to keep them reasonably well aligned with human values (a sketch of the basic idea appears after this list). Why would this approach fail as AI systems become more advanced?
  3. AI safety researchers are currently trying approach X. Why would this approach fail?
  4. If AI systems became seriously misaligned, wouldn’t we notice this before they became capable of causing human extinction?
  5. Why should we expect AI systems to develop bizarre and alien preferences when virtually all biological organisms have fairly mundane ones? (For instance, humans like to eat ice cream, but, as the authors themselves note, they don’t like to eat jet-engine fuel.)
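Since the second question leans on reinforcement learning from human feedback, here is a minimal sketch (my own illustration, with toy numbers) of the core idea: humans compare pairs of model answers, a reward model is trained so preferred answers score higher, and the language model is then tuned to produce answers the reward model likes.

```python
# Toy illustration of the pairwise preference loss used to train reward models.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: small when the chosen answer outscores the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Toy numbers standing in for reward-model scores of two candidate replies.
print(preference_loss(reward_chosen=2.0, reward_rejected=-1.0))   # ~0.05, correct ordering
print(preference_loss(reward_chosen=-1.0, reward_rejected=2.0))   # ~3.05, wrong ordering
```

The open question the counterargument raises is whether this feedback signal keeps working once models become far more capable than the humans rating them, which is exactly the kind of question the book could have engaged with.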

3. The Crux of Their Argument Is Barely Justified

Lastly, the crux of their argument, that AI systems will be seriously misaligned with human values no matter how they are trained, is barely justified.

In their chapter “You Don’t Get What You Train For,” they argue that, just as evolution has caused organisms to develop bizarre preferences, the training process for AI systems will cause them to develop bizarre preferences too. They mention, for instance, that humans developed a taste for sugar in their ancestral environment, yet now humans like ice cream even though ice cream didn’t exist in that environment. They argue that this pattern will extend to AI systems as well: no matter what you train them to prefer, they will ultimately prefer something much more alien and bizarre.

To extend the analogy from evolution to AI systems, they write:

  1. “Gradient descent—a process that tweaks models depending only on their external behaviors and their consequences—trains an AI to act as a helpful assistant to humans.
  2. That blind training process stumbles across bits and pieces of mental machinery inside the AI that point it toward (say) eliciting cheerful user responses, and away from angry ones.
  3. But a grownup AI animated by those bits and pieces of machinery doesn’t care about cheerfulness per se. If later it became smarter and invented new options for itself, it would develop other interactions it liked even more than cheerful user responses; and would invent new interactions that it prefers over anything it was able to find back in its “natural” training environment.”

They justify this argument with a few vague examples of how this misalignment could happen and then restate it: “The preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own, no matter how it was trained.”

For this to be the central crux of their argument, it seems like they should have given it a whole lot more justification, such as examples of how this kind of misalignment has already occurred. Setting aside the fact that we’re capable of simulating the evolution of lots of preferences, their argument isn’t even intuitively true to me. If we’re training something to do a thing, it seems far more natural to assume that it will end up preferring to do that thing rather than something vastly different and significantly more harmful.
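To make the disagreement concrete, here is a toy sketch (my own hypothetical example, not the authors’) of the weaker, related phenomenon their argument leans on: the training signal is only a proxy for what we actually want, and a stronger optimizer searching a wider space of behaviours can score well on the proxy while missing the target.

```python
# Toy proxy-vs-target example: the "training signal" rewards cheerful-sounding
# text, and an optimizer with more options finds a way to game it.
CHEERFUL_WORDS = {"great", "awesome", "happy", "glad", "wonderful"}

def proxy_reward(response: str) -> int:
    """Stand-in for the training signal: count cheerful-sounding words."""
    return sum(word.strip("!.,").lower() in CHEERFUL_WORDS for word in response.split())

weak_optimizer_options = [
    "Here is the refund policy you asked about.",
    "Great news, your refund has been processed. Glad to help!",
]
strong_optimizer_options = weak_optimizer_options + [
    "Awesome awesome wonderful great happy glad!",   # an "invented" option that games the proxy
]

print(max(weak_optimizer_options, key=proxy_reward))    # still a sensible answer
print(max(strong_optimizer_options, key=proxy_reward))  # the degenerate option wins
```

Whether a gap like this toy one scales into the alien, unpredictable preferences the authors describe is precisely what the chapter needed to argue for in much more detail.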

Conclusion

I was really hoping this book would bring about a positive change in how people talk about the existential risks of AI, but instead I was sorely disappointed. If you want a more clear-headed explanation of why we should be concerned about AI, I’d recommend checking out 80,000 Hours’ article “Risks from power-seeking AI systems.”
