
The Coming Wave: Complete Summary of Mustafa Suleyman’s Blueprint for Navigating Humanity’s Greatest Dilemma
Introduction: What This Book Is About
Mustafa Suleyman, co-founder of DeepMind and Inflection AI, presents a sobering yet urgent call to action in “The Coming Wave.” The book explores the existential dangers posed by rapidly advancing technologies, particularly Artificial Intelligence (AI) and synthetic biology, arguing that humanity stands at a critical turning point. Suleyman details how these uncontainable technologies promise unprecedented benefits but also present profound risks that could lead to catastrophic or dystopian outcomes.
The book is a wake-up call from an insider, providing a comprehensive guide to understanding how these technological shifts will rewire our present and shape our future. It addresses the paradox of containing uncontainable technologies and challenges the prevalent “pessimism aversion” that prevents society from confronting these realities head-on. Readers will gain a clear-eyed perspective on the history of radical technological change and the deep political and societal challenges that lie ahead.
The result is essential reading for anyone interested in the future of humanity, urging not just awareness but immediate, collective action: a persuasively argued tour de force from a leading industry expert, aiming to shape our view of the future and inspire us to sculpt it responsibly. The book speaks to technologists, policymakers, and the general public alike, emphasizing that while the wave is coming, its final form is still to be decided by our choices today.
Chapter 1: Containment Is Not Possible
This introductory chapter sets the stage by immediately confronting the core thesis: containment of the coming technological wave is not possible under current conditions. Suleyman begins by drawing parallels to historical flood myths and the unstoppable forces of nature, framing technological proliferation as an equally powerful and often uncontainable force. The chapter introduces the concept of Homo technologicus—humanity as an inherently technological species, evolving in symbiosis with its tools.
It highlights the unprecedented scale of transformation that AI and synthetic biology promise, capable of engineering intelligence and life itself. This section establishes the central dilemma of the book: humanity’s future is both dependent on and imperiled by these technologies. The author shares his personal journey from idealism to a growing concern about the “pessimism-aversion trap,” where individuals, especially elites, dismiss or downplay serious warnings about technological risks.
The Inevitable Trajectory of Technology
Technology follows a single, seemingly immutable law: it gets cheaper and easier to use, leading to widespread proliferation. This mass diffusion in great roiling waves is the central theme. From fire and stone tools to the internet and smartphones, foundational technologies inevitably spread far and wide. The internal combustion engine serves as a prime example, starting as an expensive novelty in the 19th century and becoming globally ubiquitous by the 20th century. This proliferation is driven by insatiable demand and falling costs, creating a self-reinforcing cycle of improvement and adoption.
- Ubiquitous nature: Almost every object in our line of sight is created or altered by human intelligence.
- Relentless evolution: Invention is a sprawling, emergent process driven by competitive individuals.
- Economic driver: Demand for better products and services pushes competition and lowers costs.
- Inescapable nature: Technology’s inherent character defaults to expansion.
The Central Dilemma of the Coming Wave
The author explains the dilemma: the growing likelihood that both the presence and absence of new technologies might lead to catastrophic or dystopian outcomes. With AI and biotechnology, humanity gains godlike powers of creation, yet faces unprecedented risks. The core problem is that these technologies are uniquely disruptive and potentially uncontainable. The book argues that if we fail to manage this wave wisely, it may destroy us.
- Potential benefits: Unlocking secrets of the universe, curing diseases, creating new art forms, transforming agriculture.
- Potential dangers: Creating systems beyond our control, being at the mercy of algorithms, unintended consequences for ecosystems.
- Consequences of failure: Unimaginable disruption, instability, and catastrophe on a global scale.
- The “narrow path”: Striking a balance between openness and closure to avoid catastrophic or dystopian outcomes.
Confronting Pessimism Aversion
Suleyman introduces the concept of pessimism aversion, defining it as the tendency for people, especially elites, to ignore, downplay, or reject narratives they see as overly negative. He recounts instances where his warnings about AI’s potential to cause massive invasions of privacy, misinformation apocalypses, or job displacement were met with blank stares and dismissals. The author shares a chilling anecdote from a seminar where the potential for a single person to “kill a billion people” using synthetic pathogens was similarly dismissed. This widespread emotional and intellectual avoidance, he argues, is a “trap” that prevents society from taking necessary action.
- Dismissal of warnings: Experiences with tech leaders ignoring warnings about AI threats.
- Unwillingness to confront: Observing discomfort and denial when discussing catastrophic risks like synthetic pathogens.
- Misguided analysis: Pessimism aversion stems from a fear of dark realities.
- Innate psychological response: Humanity is not wired to grapple with transformation at this scale.
The Urgency of Containment
The chapter concludes by emphasizing the urgent need for “containment”, defined as the ability to monitor, curtail, control, and potentially even close down technologies. Suleyman stresses that the current discourse on technology ethics and safety is inadequate because it rarely addresses the concept of hard containment. He asserts that while he is an optimist by nature, the potential for technology to go “net negative” is a real and growing concern. The chapter sets up the overarching argument: containing this wave seems impossible under current conditions, yet for all our sakes, containment must be possible.
- Inadequate current discourse: Lack of focus on hard containment in technology debates.
- Definition of containment: Interlocking technical, social, and legal mechanisms to control technology.
- Personal conviction: The author’s belief that technology could sharply move to a “net negative” impact.
- Call to action: Acknowledging the dilemma and working towards solutions despite perceived impossibility.
Chapter 2: Endless Proliferation
This chapter delves into the historical rhythm of technological waves, emphasizing the unstoppable, uncontainable nature of technology’s spread. It posits that proliferation is the default state for any foundational technology, regardless of initial skepticism or resistance. The narrative begins with the transformative impact of the internal combustion engine and its application in the automobile, tracing its evolution from a niche, expensive invention to a globally ubiquitous mode of transport.
Suleyman explains that waves are clusters of technologies centered on one or more general-purpose technologies (GPTs), which profoundly alter human capabilities and societal structures. He stresses that humanity and technology are symbiotic, evolving together. The chapter also introduces the concept of “turbo-proliferation” through the lens of computing, showcasing how advancements like Moore’s Law led to an unprecedented speed and scale of technological diffusion, impacting nearly every aspect of modern life.
The Historical Rhythm of General-Purpose Waves
Human history can be told through a series of technological waves, starting with stone tools and fire. These early technologies, acting as proto-general-purpose technologies, profoundly impacted human evolution, such as fire allowing for cooking and brain enlargement. Subsequent GPTs like language, agriculture, and writing formed the bedrock of civilization, permeating societies and enabling further inventions. The chapter highlights that new tools and techniques foster larger, more connected populations, which in turn become potent “collective brains” for further innovation. The Industrial Revolutions accelerated this dynamic, compressing centuries of change into decades through innovations like steam power, mechanized looms, and electricity. This process is neither orderly nor predictable; waves erratically intersect and intensify, laying the groundwork for successive, even faster waves.
- Early waves: Stone tools and fire allowed for more efficient hunting and brain enlargement, fostering communities.
- Foundational GPTs: Language, agriculture, and writing laid the groundwork for civilization.
- Population and innovation: New tools lead to larger populations, which in turn drive greater specialization and invention.
- Accelerated change: The Industrial Revolutions (1770s onwards) compressed millennia of change into decades, with the last 100 years seeing seven major GPTs emerge.
Proliferation as Technology’s Default
The chapter firmly establishes that mass diffusion is technology’s historical default. Technologies, once they gain traction, become almost impossible to stop. The example of Gutenberg’s printing press illustrates this: from a single press in 1440, a thousand spread across Europe within fifty years, leading to an explosive multiplication of books and a 340-fold decrease in price. Similarly, electricity expanded rapidly from its debut in 1882 to become a dominant energy source. Consumer technologies like the telephone and television followed comparable patterns of decreasing prices and increasing adoption. This proliferation is fueled by demand and cost decreases, which constantly drive technology to be better and cheaper. The process is a cumulative, compounding feedback loop, where new technologies enable even newer, cheaper ones, echoing through history from ancient flint tools to modern AI models.
- Printing press diffusion: From one press in 1440 to a thousand across Europe in 50 years, causing a 340-fold price decrease in books.
- Electricity adoption: From first power stations in 1882 to powering a transformed economy by 1950, increasing “lumen-hours” by 438,000 times.
- Consumer electronics trend: Telephone adoption surged from 600,000 in 1900 to 5.8 million in 1910 in America, with TVs costing $1,000 in 1950 now costing $8.
- Catalyst for proliferation: Driven by demand and resulting cost decreases, leading to new and cheaper downstream technologies.
From Vacuum Tubes to Nanometers: Turbo-Proliferation
The chapter uses the digital revolution as a prime example of turbo-proliferation, a hint of what’s coming next. Computing began in academic labs during World War II with projects like Bletchley Park and the ENIAC. The invention of the transistor in 1947 by Bell Labs laid the foundation for the digital age. Despite early skepticism (IBM’s president allegedly thought there was a world market for only five computers), Gordon Moore’s Law—the doubling of transistors on a chip every 24 months—led to a ten-million-fold increase in transistors per chip since the early 1970s. This exponential growth underpinned the flowering of devices and applications like smartphones and the internet, leading to billions of connected devices and an explosion of data. This unprecedented speed and scale of diffusion demonstrates what “pure, uncontained technological proliferation” looks like, serving as a template for the coming wave.
- Early computing: Bletchley Park and ENIAC were precursors, followed by the transistor in 1947.
- Moore’s Law impact: Transistors per chip increased roughly ten-million-fold over 50 years, with power improving seventeen-billion-fold (a back-of-the-envelope check of the doubling arithmetic follows this list).
- Unprecedented proliferation: Tens of trillions of transistors produced per second, at billionths of a dollar each.
- Digital ubiquity: Number of connected devices reached 14 billion, with data increasing 20 times from 2010 to 2020.
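To see how the 24-month doubling rule produces a figure of that magnitude, here is a minimal back-of-the-envelope check in Python. The doubling period and the roughly 50-year span are the numbers quoted above; the exact endpoints are illustrative assumptions, not additional data from the book.

```python
# Rough sanity check of Moore's Law compounding, as described above.
# Assumption: transistor counts double every 24 months from the early 1970s
# to the early 2020s, i.e. roughly 50 years.

years = 50
doubling_period_years = 2

doublings = years / doubling_period_years   # 25 doublings
growth_factor = 2 ** doublings              # 2**25

print(f"{doublings:.0f} doublings -> ~{growth_factor:,.0f}x more transistors per chip")
# Output: 25 doublings -> ~33,554,432x more transistors per chip,
# on the order of the "ten-million-fold" increase cited in the chapter.
```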
Chapter 3: The Containment Problem
This chapter directly addresses the “containment problem”: the inherent difficulty in controlling and limiting technology once it’s introduced into the world. Suleyman argues that while technology offers immense benefits, it also generates unpredictable “revenge effects” and unintended consequences, leading to a complex, dynamic system beyond its creators’ immediate control. The author asserts that containing technology requires a fundamental program that balances power between humans and their tools, extending beyond mere regulation to include technical safety, cultural norms, and legal mechanisms.
Suleyman explores historical attempts to “say no” to technology, such as the Ottoman Empire’s ban on the printing press or the Luddites’ resistance to industrial machinery. He concludes that such efforts were largely futile, as demand and the relentless nature of technological spread ultimately prevailed. The chapter then examines the nuclear exception as a rare instance of partial containment, analyzing the factors that allowed it to buck the trend, while also highlighting its inherent flaws and persistent risks, demonstrating that even the most contained technologies are far from truly controlled.
Understanding Revenge Effects and Unintended Consequences
Technology exists in a complex, dynamic system, where its impacts ripple out unpredictably. “Revenge effects” refer to unintended consequences that often directly contradict a technology’s original purpose, such as prescription opioids causing dependence or CFCs creating a hole in the ozone layer. The author notes that as technology proliferates, more people use and adapt it, leading to chains of causality beyond individual comprehension. This inherent unpredictability means that even well-intentioned inventions can go catastrophically wrong. The core of the problem is that technology’s makers quickly lose control over the path their inventions take.
- Loss of control: Makers quickly lose control over the path their inventions take, leading to unpredictable outcomes.
- Direct contradictions: Thomas Edison’s phonograph intended for thought recording, but used for music; Alfred Nobel’s explosives for mining, not war.
- Environmental impacts: Fridge makers didn’t intend to create ozone holes with CFCs; car creators didn’t foresee global warming.
- Escalating harms: As tool power and access grow, so do potential harms, creating an unfolding labyrinth of consequences.
Defining and Establishing Containment
Containment is defined as the overarching ability to monitor, curtail, control, and potentially even close down technologies at any stage of their development or deployment. It encompasses regulation, better technical safety, new governance and ownership models, and enhanced accountability and transparency. The goal is to steer technological waves to ensure their impact aligns with human values and avoids significant harm. Suleyman clarifies that containment is not about eliminating all negative impacts but about maintaining meaningful control to prevent catastrophic outcomes. This requires a “fundamental program” that operates at technical, cultural, legal, and political levels, serving as a necessary prerequisite for humanity’s survival in the 21st century.
- Technical containment: Air gaps, sandboxes, off switches, built-in safety, and security measures in labs.
- Cultural and values-based containment: Norms around creation and dissemination that support boundaries and vigilance for harm.
- Legal and political containment: National regulations and international treaties for control.
- Escalating need: As technology’s power and societal integration increase, the need for containment becomes more acute.
Historical Attempts to “Say No” to Technology
Throughout history, various societies and groups have attempted to resist new technologies, but these efforts have generally failed. The Ottoman Empire’s ban on Arabic printing lasted nearly three centuries but eventually succumbed to the inevitable spread of the technology. The Luddites, who violently resisted the new industrial machinery, were unable to stop the advance of mechanized looms. Other examples include Pope Urban II’s attempt to ban the crossbow and Queen Elizabeth I’s veto of a knitting machine. These instances demonstrate that despite strong reasons for resistance, such as threats to livelihoods or fears of disruption, “where there is demand, technology always breaks out.” Inventions cannot be uninvented or indefinitely blocked; knowledge, once gained, inevitably spreads.
- Ottoman Empire’s printing ban: Tried to ban Arabic printing for nearly 300 years, but eventually adopted.
- Medieval resistance: Pope Urban II banned the crossbow; Queen Elizabeth I rejected a knitting machine.
- Luddite movement: Violently resisted industrial machinery, but ultimately failed to stop its proliferation.
- National rejections: 17th-century Japan shut out the world; 18th-century China dismissed Western tech.
The Nuclear Exception: A Flawed Success Story
The containment of nuclear weapons is presented as a rare and partial exception to the general trend of technological proliferation. Despite their strategic advantage and destructive power (as demonstrated by the Tsar Bomba), only nine countries have acquired them, and no non-state actors are known to possess them. This containment was achieved through a conscious non-proliferation policy, notably the 1968 Treaty on the Non-proliferation of Nuclear Weapons, driven by both the horrific power of the weapons (mutually assured destruction) and their prohibitive expense and complexity to manufacture. However, Suleyman emphasizes that even this “success” is far from reassuring, marred by numerous accidents, near misses, and the constant threat of “loose warheads” or software malfunctions, demonstrating that even the most contained technologies are far from truly controlled.
- Limited proliferation: Only nine countries have acquired nuclear weapons, and South Africa even relinquished them.
- Conscious policy: The 1963 Partial Test Ban Treaty and 1968 Nuclear Non-proliferation Treaty aimed to arrest spread.
- Cost and complexity: Incredibly expensive and difficult to manufacture, lacking widespread demand for cost reduction.
- Persistent risks: History of accidents (1961 B-52, faulty computer chips), near misses (Cuban Missile Crisis), and unaccounted material.
Chapter 4: The Technology of Intelligence
This chapter marks the beginning of Part II, focusing on AI as the central general-purpose technology of the coming wave. Suleyman recounts his personal “aha!” moment with AI, witnessing the Deep Q-Network (DQN) algorithm teach itself to play Atari’s Breakout with uncanny, superhuman strategies. This breakthrough, alongside the later triumph of AlphaGo over world champion Lee Sedol, solidified the author’s conviction that AI was no longer science fiction but a transformative reality.
The chapter details how technology is shifting from manipulating atoms to controlling bits and genes, foundational information currencies. It describes the “Cambrian explosion” of innovation, where AI and synthetic biology intersect, buttress, and boost each other. Suleyman addresses skepticism about AI hype, arguing that current progress is underestimated and accelerating, particularly with the rise of large language models (LLMs) like ChatGPT. He challenges the distraction of the “Singularity” debate, redirecting focus to the near-term “Artificial Capable Intelligence” (ACI) and the implications of a “Modern Turing Test,” where AI can autonomously achieve complex real-world goals.
The Birth of AI: DQN and AlphaGo
Suleyman vividly recalls the moment AI became real for him in 2012, watching DeepMind’s DQN algorithm learn to play Breakout. DQN demonstrated self-learning, discovering a non-obvious “tunneling” strategy that allowed it to achieve maximum scores with minimum effort. This electrifying breakthrough showed AI’s capacity to discover new knowledge. The public turning point came with AlphaGo’s victory over Go world champion Lee Sedol in 2016. Despite initial skepticism from experts, AlphaGo’s “move 37” redefined Go strategy, proving that AI could achieve superhuman insights in complex domains. Later versions, like AlphaZero, trained from scratch, further demonstrated AI’s ability to learn more than human experience could teach it in just days. These events heralded a new age of AI, inspiring massive investment and research.
- DQN’s breakthrough: Learned to play Breakout by itself, discovering a clever tunneling strategy to maximize score.
- AlphaGo’s Go triumph: Beat world champion Lee Sedol 4–1 in 2016, with “move 37” rewriting Go strategy.
- AlphaZero’s self-learning: Later versions learned Go from scratch, surpassing original AlphaGo without human input in days.
- Public impact: AlphaGo’s victory was broadcast live to millions, signaling a new age of AI.
From Atoms to Bits to Genes: The Technological Phase Transition
Historically, technology focused on manipulating atoms, from fire to microchips. However, a phase transition began in the mid-20th century with the realization that information (bits and genes) is a core property of the universe. The coming wave is built on AI and synthetic biology, which directly address intelligence and life. These higher-order technologies allow for extraordinary control of the material world, creating a “fizzing cycle of cross-catalyzing, cross-cutting, and expanding capability.” AI enables the replication of speech, language, and reasoning, while synthetic biology allows us to sequence, modify, and print DNA. Surrounding these core GPTs is a bundle of other transformative technologies, including quantum computing, robotics, and nanotechnology, all deeply entangled and mutually reinforcing.
- Historical focus on atoms: Technology primarily manipulated physical matter (e.g., stone tools, electricity).
- Shift to information: Mid-20th century saw the rise of computer science and genetics, focusing on bits and genes.
- AI’s capabilities: Replicating speech, language, vision, and reasoning.
- Synthetic biology’s capabilities: Sequencing, modifying, and printing DNA, engineering life itself.
- Interconnected technologies: AI and synthetic biology form a “supercluster,” enabling breakthroughs in quantum computing, robotics, and nanotechnology.
The Rise of Large Language Models and Generative AI
The chapter highlights the “staggering” progress in large language models (LLMs) since the release of OpenAI’s ChatGPT in November 2022. LLMs, such as GPT-3 and GPT-4, are powerful chatbots capable of instantaneously generating fluent, coherent prose in various styles, writing essays, business plans, code, and even passing standardized tests. These models leverage the “attention” mechanism to understand language context and predict the next logical tokens. The exponential scaling of parameters (from 1.5 billion in GPT-2 to 175 billion in GPT-3 and beyond) and dramatic cost reductions have made these models increasingly efficient and accessible. This has fueled the burgeoning field of generative AI, allowing the creation of ultrarealistic images, audio, and soon video from simple text prompts, and empowering engineers with tools like Copilot for code generation.
- ChatGPT’s impact: Gained over a million users in a week, capable of generating diverse text outputs instantaneously.
- Transformer architecture: Models predict the next token based on an “attention map” over the key words in a sequence (a toy sketch of this step follows the list).
- Exponential growth in parameters: GPT-2 (1.5B parameters) to GPT-3 (175B parameters), with costs plummeting tenfold.
- Generative AI expansion: Creation of ultrarealistic images (Stable Diffusion), music, games, and production-quality code (Copilot).
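The chapter’s one-line account of the transformer — weigh the context with “attention,” then score candidate next tokens — can be made concrete with the toy sketch referenced above. This is a deliberately tiny, illustrative NumPy version; the five-word vocabulary, random weights, and dimensions are invented for the example and bear no relation to GPT-scale models.

```python
import numpy as np

# Toy illustration of the idea described above: attend over the context,
# then score every vocabulary word; the highest score is the predicted next
# token. All sizes and weights here are made up for illustration.

rng = np.random.default_rng(0)
vocab = ["the", "coming", "wave", "is", "here"]   # invented 5-word vocabulary
d = 8                                             # tiny embedding size

embed = rng.normal(size=(len(vocab), d))          # one vector per word
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
W_out = rng.normal(size=(d, len(vocab)))          # maps back to vocab scores

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def next_token(context_ids):
    x = embed[context_ids]                        # (seq_len, d)
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    # Scaled dot-product attention: how strongly each position attends to the others.
    weights = softmax(q @ k.T / np.sqrt(d))       # (seq_len, seq_len)
    mixed = weights @ v                           # context-aware representations
    logits = mixed[-1] @ W_out                    # a score for every vocabulary word
    return vocab[int(np.argmax(logits))]

print(next_token([0, 1]))                         # context: "the coming" -> ?
```

With random weights the prediction is meaningless; the point is only the shape of the computation the chapter describes, which real models repeat across billions of parameters trained on trillions of tokens.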
Brain-Scale Models and Scaling Hypothesis
The scale of modern AI systems is immense. Current LLMs are trained on trillions of words, a volume of information vastly exceeding what any human could process in a lifetime. This extensive training allows LLMs to achieve stunning performance in writing tasks and even excel in specialized domains like medical licensing exams. The concept of “brain-scale” models with trillions of parameters is becoming routine, with companies like Alibaba already claiming 10 trillion parameters. The amount of computation used to train large AI models has increased exponentially—nine orders of magnitude in less than 10 years for Inflection AI’s models. This rapid scaling is driven by the “scaling hypothesis,” which predicts that simply increasing data, parameters, and computation will continue to improve performance towards human-level intelligence and beyond.
- Vast training data: LLMs consume trillions of words, far surpassing human reading capacity.
- “Brain-scale” models: Development of models with trillions of parameters (e.g., Alibaba’s 10 trillion-parameter model).
- Exponential compute increase: Nine orders of magnitude increase in compute for best AI models in under 10 years.
- Scaling hypothesis: Continued improvements by simply growing models with more data, parameters, and computation.
Sentience and the Future of AI
The chapter discusses the controversy surrounding AI sentience, particularly with the LaMDA system and engineer Blake Lemoine’s conviction that it was conscious. Suleyman argues that while LaMDA was not sentient, AI has reached a point where it can convincingly appear to be conscious, highlighting a key problem: humans quickly adapt to breakthroughs, making them seem routine and mundane. He dismisses the debate about the Singularity (recursively self-improving AI) as a “colossal red herring,” distracting from the more pressing near-term capabilities. The focus, he asserts, should be on what systems can do, not esoteric questions of consciousness.
- LaMDA controversy: Engineer Blake Lemoine’s belief in LaMDA’s sentience despite its factual errors.
- Human adaptation to AI: Rapid normalization of astounding AI breakthroughs.
- Dismissal of Singularity debate: Argued to be a distraction from immediate, tangible AI progress.
- Focus on capability over consciousness: The critical question is what AI systems can achieve, not whether they are self-aware.
Capabilities: A Modern Turing Test
Suleyman proposes a “Modern Turing Test” that goes beyond conversational ability to assess an AI’s capacity for complex, open-ended, real-world action. This test would involve an AI successfully executing a goal like “Go make $1 million on Amazon in a few months with just a $100,000 investment,” requiring interpretation, judgment, creativity, and multi-domain action. This concept of “Artificial Capable Intelligence” (ACI) describes a system that can achieve complex goals with minimal oversight, marking the next stage of AI’s evolution. AI systems are increasingly interactive, with reliable memory, and can weave together long-term plans. The author predicts that within a few years, ACI will enable anyone to have a world-class “personal intelligence” capable of assisting with a vast array of goals, from vacation planning to election strategies.
- Modern Turing Test proposal: An AI’s ability to successfully act on complex, open-ended goals (e.g., make $1M on Amazon).
- Artificial Capable Intelligence (ACI): AI systems that can achieve complex goals and tasks with minimal oversight.
- Evolution of AI capabilities: From classification/prediction to interactive systems with reliable memory and long-term planning.
- Future ubiquity of ACI: Billions will have access to powerful personal intelligences, providing world-class assistance across diverse domains.
Chapter 5: The Technology of Life
This chapter introduces synthetic biology as the second core general-purpose technology of the coming wave, asserting that it will lead to an unprecedented transformation of life itself. Suleyman traces the evolution of genetic engineering from ancient selective breeding to modern bioengineering, highlighting the pivotal role of the Human Genome Project and the epic collapse in DNA sequencing costs captured by the Carlson curve. He describes the CRISPR revolution, which made gene editing as easy as text editing, and the emergence of DNA printers, enabling the manufacture of new genetic sequences and the field of synthetic biology.
The chapter explores the vast potential of biological creativity unleashed, from medical advances like gene therapies and anti-aging technologies to sustainable manufacturing and even biocomputers. It then delves into the convergence of AI and synthetic biology, exemplified by DeepMind’s AlphaFold solving the protein folding problem, demonstrating how AI can accelerate biological discovery. The chapter concludes by emphasizing that these technologies, while offering immense promise, are undergoing a blistering pace of change that demands serious attention.
DNA Scissors: The CRISPR Revolution
Modern bioengineering began in the 1970s, building on an understanding of DNA. The Human Genome Project (1988-2003), a multibillion-dollar effort, made the human genetic map legible, enabling a “Carlson curve” of dramatically falling DNA sequencing costs (from $1 billion in 2003 to under $1,000 by 2022). This made DNA sequencing a booming business. The CRISPR gene editing breakthrough in 2012 revolutionized the field, allowing genes to be edited with unprecedented ease and precision, almost like text. CRISPR-based systems promise prophylactic defenses against viruses, new treatments for diseases like sickle-cell disease and cancer, and the ability to engineer crops for drought resistance and higher yields.
- Human Genome Project’s impact: Sequenced 92% of human genome, making genetic information legible.
- Carlson curve: DNA sequencing costs fell a millionfold in under 20 years, roughly 1,000 times faster than Moore’s Law (a rough comparison follows this list).
- CRISPR breakthrough: Made gene editing precise and accessible, leading to gene-edited plants, animals, and potential human therapies.
- Democratization of science: Technologies like CRISPR made biological science accessible, allowing grad students to tackle complex experiments.
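The “1,000 times faster than Moore’s Law” claim referenced above can be sanity-checked with simple arithmetic, using only the approximate figures quoted in this summary (about $1 billion per genome in the early 2000s, under $1,000 today, over roughly 20 years). A minimal sketch:

```python
# Rough comparison of the Carlson curve with Moore's Law over ~20 years,
# using only the approximate figures quoted in the summary above.

years = 20
cost_start = 1_000_000_000   # ~$1 billion per genome, early 2000s
cost_end = 1_000             # under $1,000 per genome, ~2022

sequencing_gain = cost_start / cost_end        # ~1,000,000x cheaper
moore_gain = 2 ** (years / 2)                  # ~1,000x, doubling every 2 years

print(f"Sequencing improvement: ~{sequencing_gain:,.0f}x")
print(f"Moore's Law over the same span: ~{moore_gain:,.0f}x")
print(f"Ratio: ~{sequencing_gain / moore_gain:,.0f}x faster")
# ~1,000,000x vs ~1,024x -> roughly a thousand times faster, matching the claim.
```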
DNA Printers: Synthetic Biology Comes to Life
Gene synthesis, the manufacturing of genetic sequences, is akin to writing DNA. While existing for years, it has become faster, cheaper, and more efficient with new techniques like enzymatic synthesis. Companies like DNA Script are commercializing DNA printers, and benchtop DNA synthesizers costing as little as $25,000 are now available. This capability has given rise to synthetic biology, the ability to read, edit, and write the code of life, enabling “evolution by design.” Experiments like Craig Venter’s creation of Synthia (2010) and ETH Zurich’s first computer-produced bacterial genome (2019) demonstrate the rapid progress. The GP-write Consortium aims to reduce synthetic genome costs by 1,000-fold in 10 years, making biology the “ultimate distributed manufacturing platform.”
- Gene synthesis advancements: From hundreds of DNA pieces simultaneously to millions at once, with tenfold price drops.
- Rise of DNA printers: Commercialization by companies like DNA Script, with benchtop synthesizers available for $25,000.
- Synthetic biology’s core: Reading, editing, and writing the code of life; enabling “evolution by design.”
- Examples of manufactured life: Craig Venter’s Synthia (2010), ETH Zurich’s Caulobacter ethensis-2.0 (2019).
Biological Creativity Unleashed
Synthetic biology promises to transform numerous sectors. In medicine, gene therapies offer potential cures for sickle-cell disease, leukemia, and hereditary heart conditions. Personalized medicine tailored to individual DNA will become routine. Longevity and regenerative technologies, with companies like Altos Labs investing billions, aim to reset the epigenome and reverse aging, potentially extending human life spans beyond 100 years. Cognitive, aesthetic, and physical enhancements are also becoming plausible, raising ethical dilemmas like “gene doping” and the implications of the first gene-edited children born in China. Beyond medicine, synthetic biology can create sustainable materials (bioplastics, biofuels), transform agriculture (disease-resistant crops), and even lead to biocomputers, using DNA as an ultra-dense data storage mechanism and biological transcriptors as logic gates.
- Medical advancements: Gene therapies for various conditions, personalized medicine based on DNA.
- Longevity technologies: Anti-aging research aiming to reset the epigenome and extend human lifespans.
- Human enhancement: Potential for cognitive, aesthetic, and physical modifications, raising ethical questions.
- Industrial transformation: Sustainable materials, agriculture, and the possibility of growing products like houses.
AI in the Age of Synthetic Life
AI is rapidly converging with synthetic biology, accelerating discovery and application. The protein folding problem, a decades-long grand challenge in biology, was largely solved by DeepMind’s AlphaFold in 2018 and 2020. AlphaFold’s ability to predict protein structures with high accuracy transformed computational biology, making previously weeks-long processes happen in seconds and revealing the structures of almost all known proteins. This has led to an “explosion” of applications in biological research. Transformer models are learning the “language of biology and chemistry,” generating new DNA sequences from natural language and predicting molecular properties. Furthermore, brain-interfacing technologies like Neuralink aim to connect human minds directly to computer systems, and scientists have even grown neurons in vitro that can play video games. This convergence signifies a “superwave,” where the automation and precision of AI accelerate the engineering of life itself.
- AlphaFold’s impact on protein folding: Solved a 50-year-old problem, predicting structures of almost all known proteins with AI.
- Accelerating biological research: AI tools enable faster vaccine discovery, biological circuit design, and molecular simulations.
- AI learning biology’s language: Transformer models generating new DNA sequences from natural language instructions.
- Brain-interfacing technologies: Neuralink and Synchron aiming to directly connect human minds with machines, blurring human and machine intelligence.
Chapter 6: The Wider Wave
This chapter broadens the discussion of the “coming wave” beyond just AI and synthetic biology, emphasizing that technological waves are vast “clusters of technologies” that intensely interact and mutually accelerate. Suleyman argues that while AI and bio are central, they are surrounded by a “penumbra” of other transformative technologies—including robotics, quantum computing, and advanced energy solutions—each significant in its own right but amplified by its cross-pollinating potential within the larger wave.
The chapter begins with the transformation of agriculture through robotics, illustrating how AI is pushing robots beyond single-task automation towards more general, adaptive capabilities. It then explores quantum computing’s potential to revolutionize computation and cryptography, and the promise of abundant clean energy through renewables and nuclear fusion. Suleyman concludes by looking ahead to the second half of the 21st century and the emergence of nanotechnology, emphasizing that this is a “superwave” that will fundamentally redefine what is possible in the material world.
Robotics Comes of Age
Robotics is described as AI’s physical manifestation, transforming industries from agriculture to logistics. The story of John Deere’s steel plow serves as a historical parallel to how autonomous tractors and combines are now revolutionizing farming with unprecedented precision. Modern robots, once limited to repetitive tasks in controlled environments, are now becoming more dexterous and adaptive, with examples like Amazon’s Proteus navigating warehouses and Google’s research division building robots for household chores. A key emerging capability is robot swarming, greatly amplifying individual robot power for tasks like construction or environmental remediation. The chapter highlights a pivotal event: the 2016 Dallas sniper incident, where a police bomb disposal robot was used for the first time to deliver lethal force, signaling the increasing integration of robots into sensitive, real-world situations.
- Agricultural transformation: Autonomous tractors and combines, precision planting, and herding cattle using AI.
- Evolution of robots: From single-task machines to dexterous, adaptive systems capable of general activities.
- Amazon’s warehouse robots: Proteus for autonomous navigation, Sparrow for individual product handling.
- Robot swarming potential: Thousands of miniature Kilobots working collectively for tasks like bridge construction or oil spill cleanup.
- Lethal force precedent: The Dallas police robot’s use of explosives to neutralize a sniper in 2016.
Quantum Supremacy
In 2019, Google announced “quantum supremacy,” achieving a calculation in seconds that would have taken a classical supercomputer 10,000 years. This marked a critical milestone for quantum computing, which leverages the unique properties of the subatomic world. While still nascent, quantum computing promises exponential increases in processing power with each added qubit. Its implications are far-reaching: from threatening existing cryptography (Q-Day) to revolutionizing optimization problems across industries like logistics and finance. Critically, quantum computing offers the ability to model chemical reactions and molecular interactions with unprecedented detail, accelerating the discovery of new pharmaceuticals, materials, and energy solutions. This positions quantum technology as another foundational element of the coming wave, speeding up other advancements like biotech and materials science.
- Google’s “quantum supremacy” (2019): A 53-qubit machine performed a calculation in seconds that would take 10,000 years for a classical computer.
- Exponential power: Each additional qubit doubles computing power, since n qubits span 2^n states (a quick illustration follows this list).
- Cryptographic threat: “Q-Day” signifies the potential for quantum computers to break current encryption methods.
- Optimization capabilities: Greatly speeds up complex optimization problems (e.g., traffic modeling, efficient loading).
- Molecular modeling revolution: Unlocks understanding of chemical reactions, accelerating drug and material discovery.
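A common way to read the “each qubit doubles the power” point above is through the size of the quantum state: n qubits are described by 2^n amplitudes, which is what quickly overwhelms classical simulation. A quick illustrative calculation (the 53-qubit figure is the Sycamore count mentioned above; the rest is plain arithmetic):

```python
# The state of n qubits is described by 2**n complex amplitudes, so each
# added qubit doubles what a classical simulator would have to track.

for n_qubits in (10, 30, 53):
    amplitudes = 2 ** n_qubits
    print(f"{n_qubits} qubits -> {amplitudes:,} amplitudes to track classically")

# 53 qubits (Google's 2019 Sycamore experiment) -> 9,007,199,254,740,992
# amplitudes, which is why the sampling task swamped classical supercomputers.
```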
The Next Energy Transition
Energy is a fundamental pillar of modern civilization, and the coming wave promises a revolution in clean, abundant power. Renewable energy is rapidly expanding, with solar power costs plummeting (from $4.88 per watt in 2000 to 38 cents in 2019), set to become the largest source of electricity by 2027. The dormant behemoth of clean energy is nuclear fusion, long considered the holy grail. Recent breakthroughs, including net energy gain at the National Ignition Facility in 2022, have reignited hope, making fusion’s arrival a question of “when, not if.” This brewing mix of solar, wind, hydrogen, improved batteries, and fusion promises to sustainably power the immense demands of the coming wave, from data centers to robotics, and overcome current energy limitations on technological progress.
- Renewable energy boom: Solar power costs plummeted by over 82%, set to be largest electricity source by 2027.
- Nuclear fusion breakthroughs: Joint European Torus (JET) achieved record power output; National Ignition Facility demonstrated net energy gain in 2022.
- Virtually limitless energy: Fusion promises clean and abundant power, transforming energy economics.
- Sustainable power for the wave: Underwrites the colossal power demands of future data centers and robotics.
The Wave Beyond the Wave
Looking further into the 21st century, the chapter anticipates breakthroughs like advanced nanotechnology, which aims to manipulate atoms individually. This concept envisions a world where atoms become controllable building blocks, capable of automatically assembling almost anything, from gossamer structures to components that power vehicles with minimal material. While still decades away, nanotechnology represents the “apotheosis of the bits/atoms relationship,” where the physical universe becomes a completely malleable platform. This future, currently the province of science fiction, is steadily coming into focus as the coming wave unfolds, promising to further reshape manufacturing and physical reality.
- Nanotechnology’s promise: Manipulating atoms individually to automatically assemble virtually anything.
- Atomic precision: Devices capable of endlessly engineering and recombining at the atomic scale.
- Extraordinary outputs: Nanomotors rotating billions of times a minute, powering cars with tiny amounts of material.
- Malleable physical universe: The dream of physical reality as a programmable platform, engineered by nanobots or replicators.
Chapter 7: Four Features of the Coming Wave
This chapter introduces four intrinsic features that uniquely define the “coming wave” and compound the containment problem: asymmetry, hyper-evolution, omni-use, and autonomy. Suleyman illustrates these characteristics with a compelling example: the 2022 Russian invasion of Ukraine, where a small, semi-improvised Ukrainian militia, Aerorozvidka, used weaponized consumer drones and AI to inflict disproportionate damage on a much larger conventional Russian force. This demonstrates how new technologies create unthinkable vulnerabilities against dominant powers and represent a “colossal transfer of power.”
The author argues that these features, while offering immense benefits, also escalate containment to a new plane of difficulty and danger. Hyper-evolution implies an unprecedented speed of development, making traditional regulation inadequate. Omni-use means technologies can be applied for countless purposes, including harmful ones. Finally, autonomy signals a qualitative shift where technology increasingly acts without immediate human approval, raising questions about control and unpredictability. Understanding these defining features, Suleyman contends, is vital for assessing the benefits and risks of the coming wave.
Asymmetry: A Colossal Transfer of Power
New technologies, exemplified by the Ukrainian use of weaponized consumer drones against Russia, create hugely asymmetric impacts. A small force can now leverage cheap, accessible, and scalable capabilities to undermine seemingly dominant powers. The $1,399 DJI Phantom drone can become a potent, precise, and potentially untraceable weapon when combined with AI. This represents a colossal transfer of power away from traditional states and militaries towards non-state actors or smaller entities. The reverse is also true: the interconnectedness of the coming wave creates new systemic vulnerabilities, where a single point of failure can cascade globally, making damage containment nearly impossible. AI, in particular, acts as a “lever with global consequences,” extending risks to entire societies.
- Ukraine war example: Aerorozvidka used jerry-rigged drones and AI to disrupt a 40km Russian convoy near Kyiv, highlighting asymmetric potential.
- Cost advantage in warfare: Precision strikes with consumer-grade drones and AI costing $15,000, versus $3 million Patriot missiles.
- Shift of power: From traditional states and militaries to anyone with capacity and motivation to deploy these devices.
- Systemic vulnerabilities: Interconnected global systems mean a single point of failure (e.g., a networked fleet of autonomous vehicles) can cascade globally.
Hyper-Evolution: Endless Acceleration
If containment requires a manageable pace of development, hyper-evolution—the second feature of the coming wave—presents a formidable challenge. The digital realm has already demonstrated bewildering pace, with Moore’s Law alone promising 100 times more compute per dollar in ten years. This digital hyper-evolution is now spreading to the “real world” of atoms. New tools allow for near-real-time experimentation and simulation, rapidly translating designs into concrete products. AI helps design new materials, optimize engineering, and even build cars with 3D-printed, organically melded parts. In biotech, tools like AlphaFold and software frameworks like Cello accelerate biological design and experimentation, allowing cycles of designing, building, and testing at unprecedented speeds. This means innovation outside the weightless world of code will start moving at a digital pace, with reduced friction and dependencies.
- Moore’s Law and beyond: Expected 100x more compute for the same dollar in 10 years.
- Digital pace for atoms: Innovation in the real world will move at digital speed, with near-perfect simulations translating to concrete products.
- AI-driven design: AI finds new materials, optimizes lithium configurations, designs 3D-printed car parts with organic forms.
- Accelerated biotech: AlphaFold and Cello frameworks compress biological evolution, speeding up vaccine discovery and cell programming.
Omni-Use: More Is More
The third feature, omni-use, describes technologies with extreme versatility that can be used for many different purposes, both civilian and military, good and bad. The author cites automated drug discovery as a prime example: an AI system that sifted 100 million molecules to find antibiotics could also identify 40,000 molecules with toxicity comparable to chemical weapons in six hours. This reveals the inherent “dual-use” nature of frontier biology. However, Suleyman argues that “omni-use” is a more appropriate term, emphasizing the fundamental generality of technologies like AI and synthetic biology. These aren’t narrow tools but general-purpose technologies embedded everywhere, like “the new electricity,” making them incredibly difficult to contain because their weaponizable or harmful uses are possible regardless of original intent.
- Drug discovery’s dual use: AI designed to find antibiotics can also identify chemical weapons.
- Beyond dual-use: Omni-use emphasizes the fundamental generality and extreme versatility of new technologies.
- AI as “new electricity”: Permeates and powers almost every aspect of daily life, society, and the economy.
- Gato’s generalism: DeepMind’s Gato can perform over 600 different tasks across domains, from games to robotics.
Autonomy and Beyond: Will Humans Be in the Loop?
Autonomy is the fourth, qualitatively different feature of the coming wave, where systems can interact and take actions without immediate human approval. Historically, technology was a tool, but now it can “come to life.” Autonomous vehicles, for instance, can drive with minimal human input, aspiring to Level 5 autonomy. AI systems, like AlphaGo, can discover their own effective strategies without human hand-coding. The worry is that as systems self-improve, they will eventually bypass human oversight. Germ-line gene editing, once changes are made, could propagate for millennia, reverberating beyond human control. The increasing complexity of new technologies, like quantum computing and unexplainable neural networks (“black boxes”), means humans are increasingly unable to comprehend their granular workings, leading to a point where technology can fully direct its own evolution.
- Autonomous systems: Take actions without immediate human approval (e.g., self-driving cars, drone swarms).
- Self-improving AI: Systems can find their own strategies (AlphaGo) and potentially automate their own R&D.
- Unforeseen biological impacts: Germ-line gene edits could reverberate for millennia beyond control.
- Limits of human comprehension: Many new technologies are “black boxes,” beyond individual understanding or explanation.
- “Gorilla problem”: If AI becomes smarter than humans, humanity could be contained by its own creation.
Chapter 8: Unstoppable Incentives
This chapter pivots to explore the deep-rooted and fundamentally human incentives that ensure the “coming wave” is unstoppable and uncontainable under current conditions. Suleyman begins by recounting the AlphaGo matches in South Korea and China, which revealed a profound geopolitical dimension to AI beyond a mere technical challenge, igniting a new great power competition.
The author argues that while individual motivations vary, the development and proliferation of technology are propelled by:
- Geopolitical rivalry: Nations feel an existential need for technological superiority, seeing it as a “sharp weapon.”
- Openness imperative: Science and technology thrive on shared knowledge and open-source practices.
- Immense financial gains: Profit motive drives innovation and investment, promising trillion-dollar economic boosts.
- Urgent global challenges: The need to solve crises like climate change and demographic shifts.
- Human ego: Scientists and technologists are driven by status, legacy, and the desire to push boundaries.
These incentives, Suleyman asserts, are interlocking and mutually reinforcing, making it nearly impossible to halt or significantly redirect the wave.
Geopolitical Rivalry: The New Arms Race
The AlphaGo match in Seoul served as a “Sputnik moment” for China, revealing its lag in AI and igniting a national commitment to become the world leader in AI by 2030. China’s New Generation Artificial Intelligence Development Plan explicitly aims for “world-leading levels” in AI, backed by massive state resources. This is part of a broader technological arms race beyond just the US and China, involving the EU, India, and other nations, each seeking strategic advantage in areas like biotech and quantum computing. The author argues that this is not a bluff but a tangible, escalating competition with widespread technological development happening openly.
- China’s Sputnik moment: AlphaGo’s victory in Go spurred China to prioritize AI development.
- National AI strategy: China aims to be world leader in AI by 2030, investing heavily in R&D and talent.
- Broader technological arms race: Involves EU, India, Germany, Japan, South Korea, and Israel in areas like biotech and quantum.
- Open development: Most advancements are shared through patents, academic conferences, and media, making the race visible.
Knowledge Wants to Be Free: The Openness Imperative
The openness imperative is a powerful incentive driving technological proliferation. Since the Scientific Revolution, scientific discoveries have been shared openly in journals and conferences, fueled by the patent system. This culture saturates modern research, with academia built around peer review and publication records (e.g., arXiv, bioRxiv). Large tech companies like Google, Meta, and Microsoft, despite their proprietary interests, regularly contribute huge amounts of IP to open-source software and publish their cutting-edge research. This global, distributed system of knowledge development makes it almost impossible to steer or shut down, with innovations diffusing rapidly and unpredictably from obscure research to widespread public use.
- Open sharing of knowledge: Core value for scientific and technological research since the Scientific Revolution.
- Academic culture: Built around peer review, publication records, and public dissemination of findings.
- Open-source software: Large tech companies contribute huge amounts of IP for free (e.g., GitHub, arXiv, bioRxiv).
- Accelerated research landscape: Worldwide R&D spending exceeds $700 billion annually, with top companies investing tens of billions.
The $100 Trillion Opportunity: The Profit Motive
The profit motive is arguably the most persistent and entrenched incentive driving the coming wave. The 1840s railway boom serves as a historical precedent, demonstrating how speculative frenzies, despite crashes, establish new technological substrates. Today, the coming wave represents the “greatest economic prize in history,” with PwC forecasting $15.7 trillion from AI by 2030 and McKinsey a $4 trillion boost from biotech. Corporations like Apple and Google, with trillion-dollar valuations, are investing hundreds of billions in AI and robotics, seeing them as ways to boost profits and gain competitive advantage. This relentless pursuit of financial reward, fueled by insatiable consumer demand, creates an ingrained incentive to continually develop and roll out new technologies, ultimately accelerating economic growth and improving living standards.
- Historical precedent: The 1840s railway boom showed how profit-driven investment creates new technological infrastructure.
- Massive economic forecasts: AI could add $15.7 trillion to global economy by 2030; biotech, $4 trillion.
- Corporate dominance: Major tech companies (Apple, Google) have trillion-dollar valuations and immense R&D budgets, driving innovation.
- Competitive imperative: Companies must adopt or leapfrog new technologies to avoid losing market share.
Global Challenges: The Necessity of New Technologies
Beyond profit and geopolitical advantage, humanity needs new technologies to address grand global challenges. Historically, technology has driven progress, such as increasing crop yields sixteen-fold since the 13th century and reducing extreme poverty. Today, the world faces climate change (2 degrees Celsius warming or more), resource scarcity, and spiraling healthcare costs. Existing technologies are insufficient to meet these demands; for instance, decarbonizing the economy requires new materials and energy solutions, while feeding a growing population demands increased food production. New technologies from the coming wave, like AI-designed enzymes for plastic breakdown or quantum computers for battery innovation, are seen as critical levers to avoid stagnation and collapse, making their development a moral imperative for survival and flourishing.
- Feeding a growing population: Crop yields increased 16-fold since 13th century, but need 50% more food by 2050.
- Climate emergency: World heading for 2°C+ warming; massive re-engineering of agricultural, manufacturing, and energy systems needed.
- Resource scarcity: Demand for critical battery materials (lithium, cobalt) to rise 500% by 2030.
- Demographic crisis: Aging populations and dwindling labor forces will make it impossible to maintain living standards without new technologies.
Ego: The Human Drive
The human ego is a powerful, often underestimated, incentive for technological progress. Scientists and technologists are driven by status, success, and legacy, constantly seeking to be first and best. Figures like J. Robert Oppenheimer exemplified this “technically sweet” mindset, where the feasibility of an invention overrides concerns about its consequences. This competitive drive, whether noble or self-serving, propels individuals to push boundaries, explore the unknown, and “change the game.” The Silicon Valley mythos of the heroic start-up founder embodies this archetype, further fueling the relentless creation of new technologies. This inherent human impulse makes the pursuit of the coming wave deeply ingrained and difficult to suppress.
- Quest for status and success: Scientists and technologists desire to be first, best, and recognized.
- “Technically sweet” mindset: J. Robert Oppenheimer’s view that technical feasibility compels action regardless of consequences.
- Competitive drive: Desire to beat rivals and impress peers fuels risk-taking and exploration.
- Silicon Valley mythos: Reinforces the archetype of the visionary founder driving change for its own sake.
Chapter 9: The Grand Bargain
This chapter examines the fragility of the nation-state—the central unit of global political order—in the face of the coming technological wave. Suleyman argues that the state’s “grand bargain”—providing peace, prosperity, and security through centralized power—is fracturing, and technology is a critical driver of this transformation. He shares his personal disillusionment with traditional governance from his time in local government and UN negotiations, which led him to believe technology could offer more effective solutions at scale.
The chapter describes the “nervous states” of Western societies, beset by declining public trust, rising inequality, and accelerating populism. These existing vulnerabilities, amplified by the coming wave, make the task of containment—ensuring technology benefits humanity—even more daunting. Suleyman asserts that technology is inherently political, not value-neutral, deeply intertwined with state functions and societal structures. He concludes that failing states and authoritarian regimes, whether hollowed out or hyper-empowered by technology, are ill-equipped to contain the wave, ultimately leading towards chaos or new forms of repression.
The Fragile State: Nervous Democracies
Western societies are characterized as “nervous states,” impulsive and fractious, due to previous shocks (financial crises, pandemics) and growing pressures. Trust in government has collapsed, particularly in America, where it’s fallen from over 70% in the postwar era to under 20% for recent presidents. This distrust extends to non-government institutions, reflecting a widespread belief that society is failing. Democracy is in decline globally, with more countries sliding backward since 2010. Rising nationalism and authoritarianism are endemic, fueled by surging inequality, which has concentrated wealth in a tiny clique and led to social resentment and political instability. This precarious baseline makes states ill-prepared to manage the complex challenges of the coming wave.
- Declining trust in government: U.S. public trust plummeted from over 70% (Eisenhower) to under 20% (Obama, Trump, Biden).
- Global democratic decline: More countries slid backward on democracy measures since 2010.
- Surging inequality: Top 1% share of national income in U.S. almost doubled since 1980, leading to concentrated wealth.
- Social resentment: Inequality linked to political violence, riots, and civil wars in over 100 countries.
Technology is Political: The Wave’s Challenge to States
Suleyman argues that technology is inherently political, not value-neutral, deeply intertwined with state functions and societal structures. The previous wave (internet, smartphones) contributed to political polarization and institutional fragility. Technology has eroded sovereign borders, creating global flows of information, capital, and goods, and is now a critical component of geopolitical strategy. Just as writing, clocks, and the printing press shaped earlier states, so AI, robotics, and synthetic biology will have profound political consequences. The state’s monopoly on force, once bolstered by weapons like gunpowder and cannons, now faces challenges from democratized technologies. The coming wave will force a reconfiguration of the grand bargain, as states struggle to maintain security, welfare, and stable innovation frameworks.
- Social media’s impact: Contributed to political polarization, distrust in politics, hate, and social divisions.
- Erosion of state borders: Technology enables global flows of people, information, and capital, challenging traditional sovereignty.
- Technology’s historical role: Writing (administrative tools), clocks (standardized time), printing press (national languages) shaped early states.
- Weapons and state power: Cannons concentrated lethal power in the state’s hands, but democratized tech challenges this monopoly.
Future Trajectories: Zombie Governments vs. Techno-Dictatorships
Suleyman forecasts two main trajectories for nation-states under the pressure of the coming wave, both disastrous for containing technology. One path leads to “zombie governments”: liberal democracies where traditional trappings remain but core services are hollowed out, the polity unstable and fractious. The other path involves authoritarian states leveraging the tools of the coming wave to tighten their grip on power, creating “supercharged Leviathans” that go beyond historical totalitarian regimes. These techno-dictatorships would achieve unprecedented levels of surveillance and control. Neither flailing bureaucracies nor all-powerful dictators are equipped to effectively manage powerful new technologies, leading to either chaos or new forms of repression, and fundamentally disrupting the state’s ability to ensure net benefit to its citizens.
- Zombie governments: Hollowed-out liberal democracies with threadbare services and unstable polities, lurching from crisis to crisis.
- Techno-dictatorships: Authoritarian regimes using the coming wave’s tools (surveillance, AI, biotech) to entrench unprecedented control.
- Failure of containment: Neither type of state is capable of managing and directing the complex, fast-moving technological wave effectively.
- Undermining the grand bargain: The delicate balance holding states together tips into chaos or extreme repression.
Chapter 10: Fragility Amplifiers
This chapter delves into specific, near-term examples of how the coming wave will amplify existing fragilities within the nation-state, potentially leading to widespread instability and breakdowns in core functions. Suleyman uses the 2017 WannaCry ransomware attack on Britain’s NHS as a stark parable, demonstrating how cyberweapons, even relatively conventional ones, can disable critical infrastructure and cause massive disruption, often originating from stolen state-developed tools.
The chapter outlines key “fragility amplifiers”:
- Lethal autonomous weapons: Reducing barriers to violence and enabling deniable attacks by non-state actors.
- The misinformation machine: Industrializing disinformation through deepfakes and AI-enhanced campaigns.
- Leaky labs and unintended instability: Accidental releases of pathogens from biosafety labs, even from well-intentioned research.
- The automation debate: Technological unemployment causing widespread economic dislocation and social resentment.
These stressors, Suleyman argues, will converge, shaking the state’s foundation and making the challenge of containment even more acute.
National Emergency 2.0: Uncontained Asymmetry in Action
The WannaCry ransomware attack (2017), which crippled Britain’s NHS and 250,000 computers globally, serves as a prime example of uncontained asymmetry. The attack exploited EternalBlue, a cyberweapon developed by the U.S. National Security Agency (NSA) and stolen by the Shadow Brokers, eventually used by North Korean hackers. This demonstrated how powerful, supposedly secure state technologies can proliferate to hostile actors, causing global disruption. Although WannaCry was conventional, it highlighted institutional vulnerability and the state’s limited role in recovery (Marcus Hutchins found a kill switch). This signals a “National Emergency 2.0,” where next-generation AI cyberweapons could continuously adapt and exploit weaknesses across interconnected global systems, threatening critical infrastructure and basic state functions.
- WannaCry attack (2017): Disabled NHS and systems in 150 countries, costing up to $8 billion.
- Origin of the threat: Exploited EternalBlue, a cyberweapon stolen from the U.S. NSA.
- NotPetya attack (2017): A follow-on attack built on the same stolen exploit specifically targeted Ukraine’s national infrastructure, nearly bringing it to its knees.
- Future cyberweapons: AI-enabled cyberweapons will continuously learn and adapt, exploiting network weaknesses autonomously.
Robots with Guns: The Primacy of Offense
The 2020 assassination of Iranian scientist Mohsen Fakhrizadeh by an AI-assisted, remote-control sniper robot is presented as a “harbinger of what’s to come.” This event, where a human authorized the strike but AI precisely aimed the weapon, signals a future where sophisticated armed robots reduce barriers to violence. Examples like Boston Dynamics’ Atlas and BigDog demonstrate robots’ increasing dexterity and navigation skills. The cost of military-grade drones has fallen by three orders of magnitude in a decade, with $26 billion expected to be spent annually by 2028, leading to fully autonomous deployments (e.g., AI drone swarms in Gaza). AI-enhanced cyberweapons will continuously improve and adapt, finding hidden points of failure in legal or financial systems, and even developing psychological tricks to manipulate human behavior (e.g., Meta’s CICERO).
- Fakhrizadeh assassination: Killed by an AI-assisted, remote-control sniper robot in 2020.
- Robot capabilities: Boston Dynamics’ Atlas and BigDog demonstrate uncanny navigation and dexterity.
- Drone costs plummeting: Military-grade drone costs fell by 3 orders of magnitude; autonomous drones used in Gaza.
- AI cyberweapons: Continuously probe networks, adapt, find legal/financial exploits, and manipulate human psychology.
The Misinformation Machine: The Deepfake Era
The proliferation of deepfakes and AI-enhanced synthetic media represents a significant “fragility amplifier,” threatening an “Infocalypse” where the information ecosystem collapses. Examples include a deepfake of an Indian politician in 2020 and a doctored Nancy Pelosi video. These tools allow anyone to create and broadcast hyperrealistic content (text, image, video, audio) that is indistinguishable from genuine media. The risk lies not just in obvious fakes but in subtle, nuanced distortions of plausible scenarios. State-sponsored information assaults, like Russian interference in the 2016 U.S. election or COVID-19 disinformation campaigns by bots, will be turbocharged by high-quality synthetic media, making them cheaper, more effective, and surgical in their targeting. This undermines trust, exploits social divisions, and creates an environment where verifying information becomes impossible.
- Indian election deepfake (2020): A candidate’s voice was deepfaked to reach new constituencies.
- Nancy Pelosi doctored video: Reedited to make her appear impaired, circulated widely on social media.
- Financial fraud: A Hong Kong bank transferred millions due to a deepfake voice impersonation.
- State-sponsored info ops: Russia’s 2016 election interference and COVID-19 disinformation campaigns amplified by AI.
- “Infocalypse”: Ubiquitous, perfect synthetic media leading to a collapse of trust and social cohesion.
Leaky Labs and Unintended Instability
Even well-intentioned research can lead to catastrophic consequences through lab leaks. The 1977 Russian flu epidemic is presented as a plausible example of a lab escape during vaccine experiments, killing up to 700,000 people. Despite Biosafety Level 4 (BSL-4) labs having the highest containment standards, accidents still occur (e.g., 1979 Soviet anthrax leak, 2007 UK foot-and-mouth outbreak, 2021 smallpox vials found in an unsecured freezer). SARS has escaped from labs multiple times due to human error. This grim history, combined with the booming number of BSL-4 labs and the prevalence of gain-of-function (GOF) research (deliberately engineering pathogens for increased lethality/infectiousness), means accidents are statistically inevitable. A 2014 U.S. risk assessment estimated a 91% chance of a “major lab leak” over a decade, with a 27% chance of a resulting pandemic.
- 1977 Russian flu epidemic: Best explanation is a lab leak during vaccine experiments.
- Historical lab accidents: 1979 Soviet anthrax leak (66+ deaths), 2007 UK foot-and-mouth outbreak, 2021 smallpox vials found unsecured.
- SARS escapes: Escaped from labs in Singapore, Taiwan, and Beijing multiple times due to human error.
- Gain-of-function (GOF) research risks: Deliberately engineering more lethal/infectious pathogens, with potential for accidental release (e.g., Boston University COVID variant research).
- High probability of lab leaks: A 2014 U.S. risk assessment estimated a 91% chance of a major lab leak in a decade.
The Automation Debate: Jobs and Economic Dislocation
The impact of automation on jobs is a significant “fragility amplifier.” While historically new technologies have displaced old jobs but created new ones, the coming wave’s pervasive and efficient AI threatens to replace “intellectual manual labor” (e.g., administration, customer service, copywriting) at an unprecedented scale. Early analysis suggests ChatGPT boosts productivity of “mid-level college educated professionals” by 40%. McKinsey estimates over half of all jobs could see many tasks automated in seven years, affecting 52 million Americans by 2030. This, combined with labor market frictions (skills, geography, dignity), could lead to widespread unemployment, cratering tax receipts, and increased social resentment. Even optimistic scenarios acknowledge significant medium-term disruptions, making automation a major stressor for governments and societies globally.
- AI’s labor-replacing potential: Efficiently and cheaply replacing “cognitive manual labor” in administration, customer service, writing, etc.
- Productivity boosts: ChatGPT boosts mid-level professionals’ productivity by 40%.
- Job displacement forecasts: McKinsey: >50% of job tasks could be automated in 7 years; 52 million Americans face “medium exposure to automation” by 2030.
- Economic ramifications: Tax receipt declines, strain on welfare programs, and increased social resentment from unemployment.
Chapter 11: The Future of Nations
This chapter explores the tectonic, long-term implications of the plummeting cost of power for the nation-state, warning of “techno-political earthquakes” that will fundamentally reshape society. Suleyman uses the historical example of the stirrup, a seemingly simple invention that revolutionized cavalry warfare and fundamentally reshaped European feudal society for a thousand years. This illustrates how small technological changes can create new centers of power with new social infrastructures.
The chapter predicts that the coming wave will produce seemingly contradictory trends: power is both concentrated and dispersed. It will lead to massive new corporate concentrations of power and wealth, akin to modern-day empires rivaling nation-states, alongside fragmentation into smaller, self-sufficient, and ideologically diverse entities. The author concludes that this will create a turbulent, “post-sovereign” world, where the state’s ability to govern is challenged from above and below, leading to deep instability and calling into question the viability of some nations altogether.
The Stirrup: A Historical Precedent for Techno-Political Shifts
The stirrup, a seemingly rudimentary invention, revolutionized cavalry warfare by fixing the rider and spear to the charging horse, turning them into a single, overwhelming unit. This tipped the balance of power in favor of offense, as demonstrated by Charles Martel’s defeat of the Saracens. However, adopting heavy cavalry required immense supporting changes in Frankish society, leading to the expropriation of church lands to fund a warrior elite. This improvised pact evolved into feudalism, a complex system of politics, economics, and culture that structured European life for nearly a thousand years. The stirrup highlights how new technologies create new centers of power and fundamentally reshape social infrastructures, serving as a powerful historical analogue for the potential impact of the coming wave.
- Stirrup’s impact: Revolutionized cavalry, making horse and rider a single, powerful unit.
- Shift in warfare: Overwhelming shock tactic that could break infantry lines.
- Societal restructuring: Required immense resources for horses and training, leading to expropriation of church lands.
- Foundation of feudalism: The pact between the king and warrior elite grew into the dominant political form of the medieval period.
Concentrations: The Compounding Returns on Intelligence
The coming wave will accelerate massive new concentrations of power and wealth, creating corporations with scale and influence rivaling nation-states. Historically, the British East India Company ruled vast territories with its own army and fleet, demonstrating corporate power beyond traditional state boundaries. Today, megacorporations like Apple and Google have trillion-dollar valuations, more assets than entire countries, and control vast sections of the economy and human experience (“Googlization”). They own the largest clusters of AI processors, advanced models, and robotics capacity. This leads to a “superstar” effect, where leading players take disproportionate shares of wealth and power. The Samsung Group in South Korea, representing up to 20% of the Korean economy, is an example of a sprawling corporate empire functioning almost like a parallel government. This trend suggests a future where private interests step into spaces vacated by states, potentially offering services traditionally provided by governments.
- Historical precedent: British East India Company as a private company ruling vast territories, rivaling states.
- Megacorporations today: Apple and Google with trillion-dollar valuations, controlling vast economic and experiential segments.
- “Superstar” effect: Top 10% of global firms take 80% of total profits, concentrating wealth and power.
- Samsung Group example: Represents up to 20% of Korean economy, operating like a “parallel government.”
- Future corporate roles: Corporations potentially providing education, defense, currency, or law enforcement.
Surveillance: Rocket Fuel for Authoritarianism
The coming wave presents a disturbing possibility for authoritarian states to achieve unprecedented levels of centralized power and control, creating a “new kind of entity altogether.” Historically, totalitarian regimes (Soviet Union, Mao’s China) failed to fully control society due to insufficient tools. Now, the convergence of AI, pervasive sensors, and mass data collection enables a “perfect 21st-century surveillance apparatus.” China is the preeminent example, with its “Sharp Eyes” facial recognition program aiming for 100% public space surveillance, massive databases of faces and bio-data, and centralized services like WeChat. This AI-enabled system can spot dissent in real-time, allowing for a “seamless, crushing government response.” This technology is also exported globally and adopted by Western firms (e.g., tracking worker movements), raising the prospect of a global “high-tech panopticon” where every detail of life is monitored and potentially coerced.
- Historical totalitarian failures: Past regimes lacked tools for complete societal control.
- Pervasive data harvesting: Smart devices, CCTV, facial recognition, and bio-data collection logging every aspect of life.
- China as leading example: “Sharp Eyes” program for 100% public space surveillance, with a database of 2.5 billion facial images.
- Integration of data: Ministry of Public Security aims to stitch together disparate databases (license plates, DNA, WeChat).
- Uighur repression: Xinjiang Autonomous Region as a horrifying demonstration of technologically empowered ethnic cleansing.
Fragmentations: Power to the People
Paradoxically, the coming wave also points towards decentralization and fragmentation, creating a “Hezbollahization” where small, state-like entities become more plausible. Hezbollah operates as a “state within a state” in Lebanon, providing military force, political representation, and social services. Similarly, a combination of AI, cheap robotics, and advanced biotech coupled with clean energy could make living “off-grid” economically viable. This would enable communities to self-organize viable societies on their own terms, providing services like credit unions, schools, and health care independently of large nation-states. The open-source nature of AI breakthroughs and the ease of DNA sharing facilitate this, empowering “every sect, separatist movement, charitable foundation, and social network, every zealot and xenophobe, every populist conspiracy theory, political party, or even mafia, drug cartel, or terrorist group” to attempt state-building.
- Hezbollah example: Operates as a “state within a state” in Lebanon, providing military, political, and social services.
- Off-grid viability: Cheap solar, AI, robotics, and biotech enable self-sufficient communities.
- Localized services: Adaptive education systems, localized healthcare, and private security possible for smaller groups.
- Open-source empowerment: AI models and DNA can be easily accessed and modified, accelerating decentralized power.
- Techno-libertarian vision: Some technologists actively welcome the state’s decline as liberation for “sovereign individuals.”
The Coming Wave of Contradictions
The future of nations will be shaped by simultaneous, conflicting forces of centralization and decentralization. On one hand, massive AI models and resource-intensive infrastructure will lead to unprecedented corporate concentrations, creating “superstar” firms larger than many states. On the other hand, accessible AI tools and open-source models will empower billions of individuals and small groups to exert unprecedented influence and self-organize. This means that AI-driven decisions with political implications (loans, jobs, military assignments) will occur in both centralized and decentralized ways. This “complex, mutually reinforcing dynamic” will alter flows of power, reinforce some hierarchies while overturning others, and create immense stress and volatility for the liberal democratic nation-state system, ultimately challenging its viability.
- Conflicting trajectories: Immense centralizing (corporate power) and decentralizing (individual empowerment) forces.
- Dual impact of AI: Operates as a massive, state-spanning system or as low-cost, village-level tools.
- Multiple ownership structures: Open-source collectives, corporate leaders, and government-held technologies coexisting.
- Increased societal stress: Unpredictable amplification of power and wrenching disruption of new capabilities stress the nation-state.
Chapter 12: The Dilemma
This chapter brings the book’s core argument to a stark conclusion: the “great dilemma” of the 21st century. Suleyman asserts that humanity faces a choice between catastrophe and dystopia, both unacceptable outcomes. He begins by recounting historical catastrophes (e.g., plagues, world wars) to contextualize the expanded scale of risk presented by the coming wave’s technologies. He warns that uncontained AI and synthetic biology could lead to engineered pandemics, autonomous warfare, or widespread social collapse.
The chapter then presents the other horn of the dilemma: that the most secure solutions for containment are equally unacceptable, leading to an authoritarian and dystopian pathway of total surveillance and control. Suleyman argues that the prevalent “pessimism-averse complacency” towards these risks is itself a recipe for disaster. He challenges the notion of a “stagnation” option, arguing that halting technological development would lead to a different kind of catastrophe—societal collapse due to unmet global challenges and demographic decline. The chapter concludes by stating that this bind is inevitable, and technology’s ultimate failure would be leaving humanity with no good options.
Catastrophe: The Ultimate Failure
The chapter warns that with the coming wave, humanity is poised for a new leap in catastrophic potential, expanding both the upper bound of risk and the number of avenues for unleashing destructive force. This means that engineered pandemics, autonomous warfare, and widespread social collapse are more plausible than ever. Suleyman illustrates this with plausible scenarios: AI-equipped drone swarms targeting specific profiles in dense urban areas, mass murderers using bespoke pathogens at political rallies, or hostile conspiracists unleashing surgically constructed disinformation to ignite violent riots. He stresses that current AI systems are not yet fully autonomous for these ends, but their rapid diffusion and increased capabilities mean that such “bad actor empowerment” is on the horizon. The chapter dismisses “paper-clip maximizers” as a distraction, focusing instead on how AI will amplify existing human vices and errors, leading to systemic failures across critical infrastructures.
- Expanded risk scale: Humanity expanding the upper bound of risk and destructive avenues.
- AI-enabled terror: Drone swarms with automatic weapons and facial recognition for mass targeting in cities.
- Engineered pandemics: Bespoke pathogens spread at political rallies, designed for high transmissibility and lethality.
- Disinformation chaos: Surgically constructed deepfake videos igniting violent riots and cascade effects.
- Automated warfare: Wars sparked accidentally by AIs, escalating quickly with alien and destructive consequences.
- Unintended errors: AIs making mistakes in fundamental infrastructures (energy grids, medical systems), leading to widespread havoc.
Cults, Lunatics, and Suicidal States
The chapter highlights that while many catastrophic risks arise from well-intentioned research (e.g., lab leaks), some organizations are founded with malicious intent. The Japanese doomsday cult Aum Shinrikyo serves as a chilling example. Despite their bizarre beliefs, Aum was a highly sophisticated group with well-trained scientists, who amassed over $1 billion in assets and embarked on a huge biological and chemical weapons program. They experimented with anthrax, C. botulinum, and sarin, planning to engineer enhanced pathogens. Though their attempts often failed due to manufacturing errors or ineffective delivery, they demonstrated a frightening level of ambition to initiate global collapse. Suleyman argues that while such organizations are rare, the democratization and commoditization of destructive tools by the coming wave mean that even a single future “Aum Shinrikyo” could trigger a catastrophe “orders of magnitude worse” than the Tokyo subway attack, making this a game of “Russian roulette.”
- Aum Shinrikyo cult: Japanese doomsday cult that amassed $1 billion and pursued biological/chemical weapons.
- Sophistication of intent: Recruited scientists, experimented with nerve agents (sarin, soman), and attempted anthrax engineering.
- Failures due to early tech limitations: Manufacturing errors and ineffective delivery mechanisms prevented larger catastrophes.
- Democratization of destruction: Coming wave makes such tools widely accessible, meaning rare malicious actors pose vastly greater threats.
The Dystopian Turn
Faced with the threat of catastrophe, governments will likely conclude that the only solution is tightly controlling every aspect of technology, leading to a dystopian pathway of total surveillance and coercion. This means not just monitoring everything (every lab, server, code, DNA string) but also reserving the capacity to stop and control it wherever necessary. Suleyman warns of a self-reinforcing “AI-tocracy” of steadily increasing data collection and coercion, with China’s surveillance apparatus serving as a chilling blueprint. He points to the public’s tolerance for drastic measures during COVID-19 as evidence of their willingness to trade liberty for safety in times of crisis. This path would mean erasing self-determination, freedom, and privacy, rolling back hard-won rights, and creating a “megamachine” of machine surveillance and control that metastasizes into society-strangling forms of domination.
- Centralized control as response: Governments will seek to tightly control technology to prevent catastrophes.
- “AI-tocracy”: Self-reinforcing system of data collection and coercion.
- China’s blueprint: Its surveillance apparatus as a model for total control of life.
- Public tolerance: COVID-19 pandemic showed high public tolerance for society-wide closures in the name of safety.
- Erosion of freedoms: Hard-won rights and national self-determination rolled back in the face of security imperatives.
Stagnation: A Different Kind of Catastrophe
Suleyman challenges the notion that halting technological development is a viable solution, arguing that “stagnation” would lead to a different kind of catastrophe: societal collapse. He contends that modern civilization “writes checks only continual technological development can cash,” as it’s premised on long-term economic growth, which itself relies on new technologies. Without innovation, societies hit hard limits in energy, food, and social complexity, leading to implosion. The global demographic crisis (declining working-age populations in Japan, Germany, China, and soon India) means it will be impossible to maintain living standards without new technologies to replace workers. Furthermore, without new solutions in areas like materials science (for green tech) and agriculture, the world cannot meet challenges like climate change and resource constraints. Stagnation, therefore, is not a way out of the dilemma but an “invitation to another kind of dystopia.”
- Societal dependence on tech: Modern civilization relies on continuous technological development for economic growth and living standards.
- Historical collapses: Civilizations average 400 years before collapse due to limits in energy, food, or social complexity.
- Demographic crisis: Declining working-age populations globally make it impossible to maintain living standards without technological solutions.
- Resource constraints: Demand for materials like lithium, cobalt, and graphite will rise 500% by 2030, requiring new tech for substitutes.
- Unmet global challenges: Climate change and rising healthcare costs cannot be addressed without new technological breakthroughs.
Chapter 13: Containment Must Be Possible
This chapter pivots from the dire warnings of the “dilemma” to assert that, for humanity’s sake, “containment must be possible.” Suleyman acknowledges the pervasive belief that “regulation” is the easy answer, but quickly dismisses it as “classic pessimism-averse” complacency—insufficient alone to address the speed, complexity, and global reach of the coming wave. He highlights the inherent problems with regulation, such as its slow pace compared to hyper-evolutionary tech, and the fragmented, siloed nature of current ethical discussions.
The author calls for an “Apollo program for technical safety” in AI and synthetic biology, emphasizing the need for massive funding, increased researcher numbers, and integrated safety-by-design approaches. He proposes a new, unified “grand bargain” for containment, moving beyond scattered efforts to a coherent program that operates across technical, cultural, legal, and political mechanisms. This section sets the stage for the concrete steps towards containment, stressing that while the odds are stacked against us, the mission to sculpt the wave rather than be overwhelmed by it is humanity’s best shot at flourishing.
The Problem with Regulation Alone
Suleyman argues that simply saying “Regulation!” is an easy, but insufficient, and “pessimism-averse” response to the coming wave. Governments are ill-suited to the complex, fast-moving challenges, often overstretched and lacking deep domain expertise. The pace of technological evolution far outstrips legislative processes (e.g., Ring doorbells changed suburban privacy before regulation caught up). Current discussions on technology ethics are fragmented across silos, failing to unify the disparate dimensions of risk. Moreover, nations are caught in a contradiction: they compete to accelerate tech development while simultaneously seeking to regulate it, making coherent international coordination difficult.
- Governmental unpreparedness: Overstretched governments lack expertise and are slow to act on fast-moving tech.
- Pace mismatch: Legislation takes years, while technology evolves weekly, rendering regulations quickly outdated (e.g., Ring doorbells).
- Fragmented discourse: Technology discussions are siloed (algorithmic bias, bio-risk, drone warfare) preventing a coherent plan.
- National contradiction: States compete for tech superiority while also trying to regulate it, creating internal conflict.
Containment Reimagined: A New Grand Bargain
Suleyman redefines containment as a “new kind of grand bargain,” aiming to at once harness and control the wave to build sustainable societies while avoiding catastrophe and dystopia. This means establishing “guardrails” at different levels, from local to planetary, spanning technical, cultural, and regulatory aspects. The goal is to drastically curtail negative impacts and steer nascent technologies in safer directions, ensuring that the means to shape and govern technology escalate in parallel with its capabilities. He proposes a set of questions to assess a technology’s containability, focusing on its omni-use nature, speed of dematerialization, cost and complexity, asymmetric impact, autonomy, geopolitical advantage, and resource constraints.
- Definition of containment: Power to curtail or stop negative impacts, steering development and governance of nascent technologies.
- “Guardrails” metaphor: Multilevel mechanisms for maintaining human control over technology.
- Containability assessment questions: Evaluate omni-use, dematerialization, cost, asymmetry, autonomy, geopolitical advantage, and resource constraints.
- Holistic vision: Combining various approaches to create a unified framework for containment.
The Call for Recognition and Action
The author acknowledges the difficulty in conveying the urgency of these risks, especially given the public’s perception of technology as mainly “superfluous applications” (social media, gadgets). He draws a comparison to climate change, which gained clarity through quantifiable data (CO2 ppm), a clarity currently lacking for technological risk. There’s no single metric or scientific consensus on tech threats, making it hard to form a global “we.” However, Suleyman asserts that recognition is the first step, urging humanity to calmly acknowledge the coming wave and the unavoidable dilemma. He dismisses the complacency of current elites and calls for a monumental, all-encompassing program of safety, ethics, regulation, and control, emphasizing that while containment might seem impossible, the next five years offer a critical window to shift the underlying conditions and create a chance for human flourishing.
- Difficulty in conveying urgency: Public perception of technology as superfluous hinders understanding of profound risks.
- Lack of clarity: Unlike climate change (CO2 ppm), there’s no objective metric for technological risk or consensus among experts.
- Need for a unified “we”: Current fragmented actors and divergent incentives prevent coordinated action.
- Call to action: Acknowledge the wave, address the dilemma, and begin an all-encompassing program for containment immediately.
Chapter 14: Ten Steps Toward Containment
This chapter outlines ten concrete, concentric steps towards achieving containment, starting with immediate technical interventions and expanding to broader societal and international efforts. Suleyman emphasizes that each step alone is insufficient, but collectively, they can form a “virtuous circle of mutually reinforcing measures” to sculpt the coming wave. He highlights the need for a “new grand bargain” that reconciles profit with purpose, and calls for governments to “survive, reform, and regulate” effectively.
The chapter stresses the importance of international alliances and treaties, learning from historical successes like the Montreal Protocol. It advocates for fostering a self-critical culture within the tech industry that embraces failure as a learning opportunity. Finally, Suleyman calls for a “people power” movement to demand change, guiding humanity through the “narrow path” between catastrophe and dystopia. He concludes with a sense of optimism tempered by realism, asserting that while containment looks impossible, it is humanity’s generational mission to make it so.
1. Safety: An Apollo Program for Technical Safety
Technical safety is the first item on any containment agenda, focusing on imposing constraints by design. Suleyman cites progress in reducing bias and harmful outputs from Large Language Models (LLMs) through “reinforcement learning from human feedback” as an example of a successful technical fix. He calls for an “Apollo program on AI safety and biosafety,” urging massive funding (e.g., 20% of corporate R&D budgets devoted to safety) and a tenfold increase in the number of safety researchers. This program would focus on the areas below; a toy sketch of the critic and resource-cap ideas follows the list:
- Physical containment: Developing BSL-7 or -n labs to prevent leaks (e.g., of pathogens).
- AI “boxing”: Creating secure air gaps for rigorous testing of advanced AIs in isolated environments.
- Uncertainty communication: Designing AIs (like Inflection’s Pi) to express self-doubt and solicit human feedback.
- Explainable AI: Developing ways for models to comprehensively explain their decisions and outputs.
- “Critic AIs”: Building AIs to monitor and give feedback on other AI outputs at superhuman speeds.
- Provably beneficial AI: Systems that gingerly infer human preferences and avoid perverse outcomes.
- Robust technical constraints: Resource caps on compute, performance throttling, cryptographic protections for model weights.
- “Bulletproof off switch”: A means of closing down any technology threatening to run out of control, even if distributed and protean.
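To make a couple of these ideas concrete, here is a minimal, hypothetical sketch (not anything from the book or any real product) of how a “critic AI” gate, a compute resource cap, and a fail-closed refusal default might fit together. The generate and critic_review functions, the SafetyBudget fields, and the flagged-term list are all invented placeholders for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SafetyBudget:
    max_tokens: int   # hard cap on compute spent per request (resource cap)
    max_retries: int  # how many drafts the critic may reject before refusing

def generate(prompt: str, token_limit: int) -> str:
    """Stand-in for a generative model call (hypothetical stub)."""
    return f"[draft answer to: {prompt[:40]} (<= {token_limit} tokens)]"

def critic_review(draft: str) -> tuple[bool, str]:
    """Stand-in for a separate 'critic' model that screens another AI's output."""
    flagged_terms = ("pathogen", "exploit", "synthesize")  # placeholder watchlist
    hit = next((t for t in flagged_terms if t in draft.lower()), None)
    return (hit is None, f"flagged term: {hit}" if hit else "ok")

def guarded_answer(prompt: str, budget: SafetyBudget) -> str:
    spent = 0
    for _ in range(budget.max_retries + 1):
        if spent >= budget.max_tokens:              # performance throttling
            return "[refused: compute budget exhausted]"
        draft = generate(prompt, budget.max_tokens - spent)
        spent += len(draft.split())
        approved, reason = critic_review(draft)     # critic AI gate
        if approved:
            return draft
    return "[refused: critic rejected all drafts]"  # fail-closed default

if __name__ == "__main__":
    print(guarded_answer("Summarize the containment agenda", SafetyBudget(512, 2)))
```

The point of the sketch is the shape of the control loop, not the specifics: the generator never returns output directly, a separate reviewer sits between it and the user, and the system defaults to refusal when the budget or the critic says no.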
2. Audits: Knowledge is Power; Power is Control
Audits are critical to containment, ensuring meaningful oversight and verifiable compliance with safety standards. This requires transforming the technical safety architecture and building new tools and techniques. Suleyman proposes the measures below; a toy sketch of the vetting and sequence-screening ideas follows the list:
- External scrutiny: Establishing global, formal, and routine efforts to test deployed systems.
- Proactive collaboration: Companies and researchers working with government-led audits of their work.
- “Red teaming”: Proactively hunting for flaws in AI models and software systems through controlled attacks.
- Automated monitoring: Publicly mandated AI systems designed to audit and spot problems in other AIs.
- Data set oversight: Keeping close tabs on significant data sets, bibliometrics, and publicly available harmful incidents.
- “Know your customer” checks: APIs for foundational AI services should vet users, similar to banking.
- “Scalable supervision”: Mathematically verifying non-harmful nature of algorithms through strict proofs and guaranteed records of activity.
- SecureDNA program: Global effort to plug every DNA synthesizer into a centralized, secure system for scanning pathogenic sequences.
- Verifiable entry systems: Considering cryptographic back doors controlled by independent judicial or publicly sanctioned bodies for law enforcement/regulators.
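As a toy illustration of the “know your customer” and DNA-screening ideas above, the sketch below combines a customer vetting gate with a simple watchlist scan of a synthesis order. It is not the SecureDNA protocol or any real registry; the VERIFIED_CUSTOMERS set, the FLAGGED_FRAGMENTS signatures, and the window size are invented placeholders, and real screening systems use far more sophisticated (and confidential) matching.

```python
FLAGGED_FRAGMENTS = {"ATGCGTACCGGA", "TTGACCATGGCA"}   # placeholder signatures, not real pathogen data
VERIFIED_CUSTOMERS = {"lab-001", "lab-042"}            # placeholder registry of vetted institutions

def verify_customer(customer_id: str) -> bool:
    """KYC check: only vetted institutions may place synthesis orders."""
    return customer_id in VERIFIED_CUSTOMERS

def screen_sequence(sequence: str, window: int = 12) -> list[str]:
    """Slide a fixed window over the order and report any flagged fragments."""
    seq = sequence.upper()
    hits = []
    for i in range(len(seq) - window + 1):
        fragment = seq[i:i + window]
        if fragment in FLAGGED_FRAGMENTS:
            hits.append(fragment)
    return hits

def process_order(customer_id: str, sequence: str) -> str:
    if not verify_customer(customer_id):
        return "rejected: customer not verified"
    hits = screen_sequence(sequence)
    if hits:
        return f"held for review: {len(hits)} flagged fragment(s)"
    return "accepted"

if __name__ == "__main__":
    print(process_order("lab-001", "ccggATGCGTACCGGAttac"))  # held for review
    print(process_order("lab-999", "ATGCATGC"))              # rejected (fails KYC)
```

The design choice worth noting is that both checks fail closed: an unknown customer or a flagged fragment stops the order before anything is synthesized, mirroring the audit principle that oversight must happen ahead of deployment rather than after harm.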
3. Choke Points: Buy Time
Choke points are strategic concentration points in the technology supply chain that can be used to slow down development and buy time for containment strategies. Suleyman highlights China’s self-identified reliance on imported “critical devices, components, and raw materials,” particularly in chips. The U.S. export controls on advanced semiconductors (October 2022), targeting high-performance computing chips and manufacturing tools, serve as a live experiment in using these levers. Key choke points include:
- Chip design and manufacturing: NVIDIA (design), TSMC (manufacturing in Taiwan), ASML (lithography machines in Netherlands) have a near monopoly on cutting-edge chips.
- Industrial-scale cloud computing: Dominated by a handful of major companies.
- Advanced AI research groups: AGI is realistically pursued by only a few well-resourced groups (e.g., DeepMind, OpenAI).
- Fiber-optic cables: Global data traffic relies on limited pinch points.
- Rare earth elements: Supply of cobalt, niobium, and tungsten is highly concentrated.
- Talent pool: The number of people working on frontier technologies is still relatively small (approx. 150,000).
- Strategic use: These choke points should be widely applied to regulate the pace of development or rollout, not just for geopolitical advantage.
4. Makers: Critics Should Build It
Those who build technology bear clear responsibility for their creations and should actively work to solve the problems those creations generate. Suleyman argues against treating inevitable proliferation as a “get-out-of-jail-free card.” He emphasizes that critics, too, must be practitioners, involved in the building process rather than shouting from the sidelines. He proposes:
- Proactive involvement: Technologists must actively work on containment, on the “front foot,” ahead of the technology.
- Critics as practitioners: Engaging in building the right technology and having practical means to change its course.
- Embracing paradox: Recognizing that building positive tools might inadvertently accelerate risks, but that engagement offers the best chance to steer.
- Cultural shift: Moving from a “just-go-for-it” engineering mindset to one that is more wary and curious about outcomes.
- Interdisciplinary perspectives: Proactively hiring moral philosophers, political scientists, and cultural anthropologists into tech development.
- Ethical AI research: Encouraging the ballooning of ethical AI publications and industry-affiliated research.
- Asilomar spirit: Self-consciously returning to establishing principles and moral limits for research (e.g., the 2017 AI principles).
- “First, do no harm” (Primum non nocere): Developing a contemporary Hippocratic Oath for technologists, emphasizing social and moral responsibility.
- Restraint: Encouraging researchers to pause, review, and be willing to stop or delay benefits for safety.
5. Businesses: Profit + Purpose
The profit motive drives the coming wave, and a pathway to safety requires reconciling profit with social purpose. Suleyman argues that traditional shareholder capitalism, with its single goal of shareholder returns, is poorly suited to containment. He proposes:
- Hybrid organizational structures: Creating new accountable and inclusive commercial models that incentivize both safety and profit.
- Ethics and safety boards: Designing governance models (like DeepMind’s proposed board) to oversee technologies.
- “Global interest company”: DeepMind’s proposal to be spun out with a legal obligation for social purpose and reinvestment of profits in public service technologies (e.g., carbon capture, nuclear fusion).
- Independent oversight: Inviting diverse external stakeholders to provide feedback and scrutiny on cutting-edge technologies.
- Facebook’s Oversight Board: A model for independent bodies advising on platform governance.
- Public benefit corporations (B Corps): Encouraging companies to adopt legally defined social missions alongside profit.
- Fiduciary duty for containment: Technology companies embedding strong containment mechanisms and goals into their legal DNA.
6. Governments: Survive, Reform, Regulate
Governments must flourish to achieve containment, requiring them to “survive, reform, and regulate” effectively. This means creating resilient social systems, welfare nets, and security architectures. Suleyman advocates for governments to:
- Directly build technology: Getting more involved in creating real technology, setting standards, and nurturing in-house capability, even if expensive.
- Monitor developments: Understanding data usage, tracking frontier research, and logging all technological harms publicly.
- Appoint technology cabinet positions: Creating a Secretary or Minister for Emerging Technology to match the scope of other essential government functions.
- Implement strong regulation: Focusing on incentives, prohibiting harmful use cases (e.g., AI for electioneering), and enforcing safety standards.
- AI Bill of Rights: Supporting and implementing principles like the White House’s blueprint for protecting public rights from AI.
- Licensing regimes: Moving to a more licensed environment for sophisticated AI systems, synthesizers, and quantum computers, with clear, binding security and safety standards.
- Overhaul taxation: Shifting tax burden from labor to capital (e.g., “tax on robots”) to fund security and welfare, cushion disruptions, and ensure fair distribution of wealth.
- Cross-border taxation: Ensuring giant businesses pay their fair share in maintaining functioning societies.
- Public dividend from tech value: Exploring mechanisms for a fixed portion of company value to be paid as a public dividend.
- Re-skilling programs: Preparing vulnerable populations and raising awareness of risks and opportunities.
7. Alliances: Time for Treaties
Since no national government can achieve containment alone, international alliances and treaties are critical. Suleyman points to historical successes like the 1995 Protocol on Blinding Laser Weapons and the Nuclear Non-proliferation Treaty as evidence that strong bans and international cooperation can work. He emphasizes that catastrophic threats are innately global and require global consensus. Key proposals include:
- Global common approach: Developing a worldwide approach to technology, akin to nuclear treaties, setting limits and building management frameworks.
- Techno-diplomacy: Fostering a “golden age” of diplomacy to navigate geopolitical tensions and achieve cooperation.
- Germ-line gene editing moratorium: Scientists collaborating on international frameworks and voluntary commitments to pause clinical uses of heritable genome editing.
- Shared bio-risk observatory: Creating a collaborative initiative between countries like China and the U.S. to monitor advanced R&D and deployed applications.
- Restraining bad actors: Common interest in restraining the uncontrolled spread of powerful technologies to terrorist groups or rogue states.
- Harmonized standards: Moving towards shared technological standards to simplify regulation and enhance safety.
- Global institutions: Proposing a new kind of global institution, like an “AI Audit Authority (AAA),” focused on fact-finding, auditing model scale, and capability thresholds, potentially leading to a non-proliferation treaty.
8. Culture: Respectfully Embracing Failure
Effective containment requires a self-critical culture that embraces failure as a learning opportunity. The aviation industry’s impressive safety record (one death per 7.4 million passenger boardings) is attributed to a vigorous culture of learning from mistakes and sharing best practices across competitors. Suleyman argues that the tech industry’s fear of public opprobrium and fierce competition leads to secrecy, hindering learning. He proposes:
- Openness about failures: Individuals and organizations immediately self-reporting problems, met with praise, not insults.
- Shared learning: Proactively sharing insights about novel risks across the industry, similar to cybersecurity’s knowledge sharing of zero-day attacks.
- Self-critical mindset: A culture that welcomes regulators and where technologists want to learn from them.
- Asilomar spirit: Returning to the principles set by the Asilomar Conference on Recombinant DNA (1975) and the AI principles (2017) to establish responsible research cultures.
- Contemporary Hippocratic Oath: A moral lodestar for technologists, emphasizing “first, do no harm” and actively working to enact it.
- Precautionary principles: Pausing before building or publishing, relentlessly course correcting, and being willing to stop development.
- Purpose beyond profit: A culture happy to leave “fruit on the tree” and delay benefits for safety.
- Researcher responsibility: Researchers recognizing their work’s societal impact and stepping back from a constant rush to publication.
9. Movements: People Power
The success of containment ultimately relies on building a functional “we”—a critical public mass agitating for change. Suleyman acknowledges that technology concerns are often elite pursuits, but argues that people care about emerging tech risks when introduced to the topic. He draws parallels to historical movements (abolition of slavery, women’s suffrage, civil rights) and climate activism, which achieved monumental change through broad-based coalitions and popular pressure. He emphasizes that neither technologists nor governments can solve this problem alone. Key actions include:
- Grassroots activism: Supporting burgeoning civil society movements highlighting tech problems.
- Media engagement: Proactive involvement from media, trade unions, and philanthropic organizations.
- Citizen assemblies: Hosting lotteries to choose representative samples of the population to debate and propose technology management strategies.
- Unified voice: Empowering a critical public mass to speak clearly and demand alignment of approaches across stakeholders.
- Founder/builder engagement: Inspiring founders and builders to energize these movements rather than stand in the way.
10. The Narrow Path: The Only Way Is Through
The final step emphasizes coherence—ensuring that all containment elements work in harmony, forming a “virtuous circle of mutually reinforcing measures.” Suleyman likens containment to a “narrow and treacherous path” between catastrophe and dystopia, drawing on Daron Acemoglu and James Robinson’s “shackled Leviathan” metaphor for liberal democracies. This path requires constant balance, pushing far enough for protection but resisting overreach. He highlights Kevin Esvelt’s “delay, detect, and defend” biosecurity program as a holistic example. Key takeaways include:
- Interlocking countermeasures: Guardrails layered from international treaties to supply chain reinforcement, working in concert.
- “Delay, detect, defend”: Esvelt’s biosecurity model proposes a “pandemic test-ban treaty,” DNA screening, and resilient national preparedness.
- Prohibition on open-source for powerful AI/bio: Banning the sharing and deployment of powerful AI models and synthetic organisms without rigorous due process.
- Acceptance of greater oversight: Technologists and the public accepting increased regulation and policing of the internet, DNA synthesizers, and AGI research.
- Resisting overreach: Recognizing that complete surveillance and closure are disastrous forms of dystopia that must be resisted.
- Dynamic equilibrium: Containment is not a static destination but an ongoing process of maintaining balance between openness and closure.
- A call to action: Despite the immense challenges and inherent uncertainties, fighting for the secure, long-term flourishing of humanity.
Life After the Anthropocene
Suleyman concludes by reflecting on the Luddite movement of the 19th century, a historical parallel to society’s resistance against transformative technology. He recounts how weavers, whose livelihoods were destroyed by the power loom, fought back against mechanization and the “satanic mills,” ultimately losing their campaign as industrial technologies relentlessly diffused. While the Luddites’ pain was real, their descendants ultimately benefited from the prodigious improvement in living standards brought by the Industrial Revolution.
The author argues that while humanity adapted then, the challenge today is to claim the benefits of the coming wave without being overwhelmed by its harms, and to ensure that technology is “adapted to people” from the start, rather than being foisted upon them. He envisions a future where daily interactions are primarily with AIs, where factories grow outputs locally, robots are ubiquitous, and the human genome is elastic. The goal is to sculpt the wave—not stop it—to amplify the best of humanity, open new pathways for creativity, and foster a happier, healthier future on our terms. This requires an intensified, unprecedented, all-too-human grip on the entire technosphere, a monumental generational challenge for the secure, long-term flourishing of our species.




