AI and Nationalism Are a Deadly Combination
If the new technology is as dangerous as its makers say, great power competition becomes suicidally reckless. Only international cooperation can ensure AI serves humanity instead of worsening war.
Dario Amodei, the CEO of leading AI company Anthropic, has written a 19,000-word warning that AI technology could spell disaster for humanity. While insisting that he and his company are developing AI responsibly, Amodei says that we are facing unprecedented risks, in part because AI is soon going to have a much greater capacity to help people and governments commit crimes against humanity. AI models, Amodei says, are getting smarter all the time, and it may soon be possible for nefarious actors to commit absolute mayhem with them, including releasing engineered pathogens, creating child sex abuse images on a massive scale, killing people with swarms of tiny drones, manipulating and blackmailing millions of people simultaneously, and more. We are, he says, at a crucial moment that will determine whether our species is capable of dealing with an exponential increase in our power to inflict cruelty and destruction, and because the technology is advancing faster than anyone expected, “we have no time to waste.”
For instance: I don’t know if you remember the COVID-19 pandemic, but a tiny virus that started out by infecting a single person soon spread across the entire world and killed seven million people. Well, thanks to AI products like the one Amodei is developing, he says that it may soon be possible for plenty of people to develop and release new deadly viruses. AI models are like having a “genius in everyone’s pocket,” “essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step.” AI might tell deranged loners how to engineer weapons of mass destruction in their garages. Oh, and lest you think that powerful AI can simply be used to figure out how to stop the threat, Amodei informs us that “there is an asymmetry between attack and defense in biology, because agents spread rapidly on their own, while defenses require detection, vaccination, and treatment to be organized across large numbers of people very quickly in response.” Oh dear.
Easy access to biological weapons is only the beginning. Amodei says it’s the threat he’s most worried about, but he believes AI will confer “unimaginable power” in many domains, and some of the possibilities he outlines for our future include: massive cyberattacks of unprecedented effectiveness, governments and corporations addicting people to AI-generated propaganda and manipulating their behavior, swarms of billions of autonomous armed AI-powered drones that will decide whom to kill, and a “global totalitarian dictatorship” that uses AI to create an absolute panopticon in which everything anyone ever does or says is completely accessible to the state.
[...]
Amodei is highly perturbed by the possibility of the Chinese Communist Party developing more advanced AI than the U.S., because China is “currently autocratic and operates a high-tech surveillance state.” This, he argues, is part of why his company must help the U.S. government develop new weapons technology. But the argument is strange. Amodei concedes that “the Chinese people themselves” are the ones “most likely to suffer from the CCP’s AI-enabled repression.” That’s because being “authoritarian” is an internal characteristic of a country; it does not by itself mean the country poses a threat to others.
If we were serious about assessing the threat posed by certain countries being the first to develop powerful new AI-enabled weaponry, the question we would ask is not whether the country is “autocratic,” but whether it is “aggressive,” meaning that it poses threats beyond its own borders. Yes, it will be disturbing if the Chinese government is able to use AI to further entrench its power over the population, but it’s not clear why the U.S. would fear this possibility, unless China were an aggressive country that posed a threat to us.
[...]
It worries me that a tech leader like Amodei, who is conscious of the risks his product is creating, is so ill-informed when it comes to international relations. For instance, he doesn’t talk at all about the important concept of the “security dilemma,” in which actions that one nation takes “defensively” are perceived as aggression by other nations, leading to the possibility of a spiraling arms race and an unnecessary conflict. But by emphasizing hostility toward China rather than cooperation with it, Amodei is making precisely this kind of situation more likely, and creating the very danger he says he wishes to avoid.
I have gone from someone skeptical of AI’s power to someone deeply worried about the very risks that Amodei discusses in his essay. But my fear is less of the technology itself than the fact that it is being developed by people like Amodei, who believe in (1) free market capitalism and (2) nationalism, two incredibly dangerous ideologies. Amodei’s capitalistic instincts show in his skepticism about regulating the technology (too much government regulation, he warns, will “potentially destroy economic value”) and his suggestion that the inequality caused by AI can be addressed by rich people voluntarily giving their money away (“a large part of the way out of this economic dilemma,” he says, is the rich feeling “a strong obligation to society at large,” to which I say, fat chance). His nationalism shows in his view of China as an existential threat as opposed to simply another country composed of human beings with the same basic desires and fears as we have.
My worry about AI is that it is being introduced in a world divided into nation-states, where too many people think as Amodei does, and see their particular country as being locked in competition with others, instead of as part of a human family that must work together through international institutions to ensure our collective survival and prosperity. Amodei does not mention the United Nations once, even though ending U.S. hostility toward it and increasing the UN’s ability to effectively regulate technology are crucial to ensuring the worst risks he discusses will not come to pass. Unfortunately, Americans in particular seem to increasingly treat international law as a dead letter, and by treating it this way they make it so, even though robust American support for international law could give it teeth and keep it from slipping into obsolescence.
So, yes, Amodei is right that we are entering a terrifying and perilous moment for our species. But what he does not realize is that his own childish view of the world, in which The Good Guys must engage in an AI arms race with the Autocrats (an arms race that, incidentally, will be very profitable for his company), heightens the risk of the worst possible outcome of all: a world war in which both sides use AI to engineer destruction on a scale that will make World War Two look like a petty skirmish.