How Innovation Works: Complete Summary of Matt Ridley’s Blueprint for Driving Progress

Introduction: The Infinite Improbability Drive

Matt Ridley’s “How Innovation Works” proposes that innovation is the most crucial, yet least understood, aspect of the modern world. It’s the primary driver behind unprecedented prosperity and the global decline of extreme poverty. Ridley challenges the conventional view that innovation is a sudden, genius-driven phenomenon, arguing instead that it is a gradual, incremental, and collective process often fueled by serendipity and trial and error. The book aims to unravel this mystery by exploring the history of various innovations, from ancient human advancements to modern technological breakthroughs.

Ridley introduces the concept of innovation as an “infinite improbability drive,” drawing a parallel to Douglas Adams’s fictional device. He posits that innovations create improbable order from existing components by expending energy, much like biological life. This process is inherently continuous, always finding ways to do more with less energy or fewer resources. The book explores how ideas evolve from mere inventions into widely adopted practices, emphasizing that innovation is far more than just invention.

The author sets out to explain why innovation happens, how it occurs, and why its timing and location are often unpredictable. He contrasts the common perception of heroic, singular inventors with the reality of collaborative, often messy, and frequently frustrating collective endeavors. Ridley promises to illustrate these points through engaging stories across diverse fields like energy, public health, transport, food, low technology, and computing, ultimately arguing for the importance of freedom and permissionless experimentation in fostering future progress.

Chapter 1: Energy

This chapter explores the history of energy innovation, focusing on how humans learned to convert heat into work, a pivotal breakthrough that enabled the Industrial Revolution. Ridley argues that this transition, while foundational, was a gradual and often anonymous process, challenging the notion of singular heroic inventors.

The Heat-to-Work Transition: A Gradual Evolution

Before 1700, human societies primarily used heat for warmth and cooking, and muscle, wind, or water for mechanical work. The key innovation was to combine these, making heat perform mechanical work. This transition was not marked by a single eureka moment but by a series of incremental improvements. Ridley highlights Thomas Newcomen’s engine as the first practical device to achieve this, but emphasizes that Newcomen himself is a mysterious figure with no known portrait or marked grave, reflecting the often unrecognized nature of early innovators.

The Unsung Heroes of Steam Power

Ridley delves into the contributions of Denis Papin and Thomas Savery, two other figures who experimented with steam. Papin, a renowned scientist of his time, first envisioned harnessing steam for practical purposes like pumping water and even powering boats. Savery secured an early patent for a steam-powered water-raising machine in 1698, though it proved inefficient and unreliable. The coincidence of their ideas emerging simultaneously suggests that the underlying conditions—better metalworking, interest in mining, and scientific fascination with vacuums—made such inventions almost inevitable. Despite their scientific prowess, neither Papin nor Savery successfully innovated the steam engine into a widespread, practical tool.

Newcomen’s Practical Breakthrough and the Role of Serendipity

Thomas Newcomen, a humble blacksmith from Devon, developed the first truly practical steam engine in 1712. Ridley notes the scarcity of information about Newcomen’s process, suggesting he may have developed his design independently of Papin and Savery. A key breakthrough, described by Mårten Triewald, was the accidental discovery of cold-water injection directly into the cylinder, which dramatically improved efficiency by creating a vacuum. Newcomen’s engine, though inefficient by modern standards, was revolutionary for pumping water out of coal mines, where fuel was cheap. This innovation, despite its crude nature, was the mother of the modern world, enabling the fantastic productivity gains that followed.

James Watt and the Separate Condenser

James Watt entered the scene in 1763, realizing that Newcomen engines wasted enormous energy by reheating the cylinder in each cycle. His simple yet crucial idea was to use a separate condenser to keep the cylinder hot while steam condensed elsewhere. This significantly improved efficiency, but Watt, like many inventors, spent years perfecting the metalwork. His partnership with Matthew Boulton brought his invention to commercial viability. Watt was a zealous defender of his patents, a practice that Ridley argues may have hindered the broader adoption and improvement of steam technology. The rapid expansion of steam engine applications after Watt’s main patent expired in 1800 highlights how intellectual property can sometimes stifle, rather than accelerate, innovation.

Thomas Edison and the Invention Business

The light bulb serves as a symbolic example of innovation as a gradual, collective, and incremental process, rather than a singular invention. Ridley points out that Joseph Wilson Swan in Britain and twenty-one other individuals worldwide had already developed or significantly improved incandescent light bulbs before Edison. Edison’s genius lay not in being the first inventor, but in his “invention business” at Menlo Park, where he systematically assembled teams of skilled craftsmen and scientists to turn ideas into practical, reliable, and affordable realities.

Edison’s approach was characterized by relentless trial and error, conducting thousands of experiments, including testing over 6,000 plant materials to find the ideal filament. His success demonstrates that innovation is 1% inspiration and 99% perspiration, emphasizing the importance of systematic effort and commercialization over isolated breakthroughs. The light bulb’s eventual ubiquity dramatically improved living standards by making artificial light cheap and accessible.

The Short-Lived Reign of Compact Fluorescents and the Rise of LEDs

The transition from incandescent bulbs highlights how government intervention can misdirect innovation. Governments globally mandated a shift to compact fluorescent bulbs (CFLs) around 2010 to reduce carbon emissions, despite their unpopularity due to slow warm-up times, shorter lifespans, and disposal hazards. This coerced adoption was a “foolish” misinnovation, costing billions.

Ridley argues that had governments waited, a far superior alternative, light-emitting diodes (LEDs), would have naturally emerged. LEDs, with roots in early 20th-century discoveries, became commercially viable after Shuji Nakamura developed the blue LED in 1993. Their eventual affordability and efficiency, achieved through two decades of incremental improvement, demonstrate the power of market-driven innovation over top-down mandates. LEDs now offer significant power savings and new applications like indoor farming, proving that innovation, when allowed to evolve naturally, often yields superior and more sustainable solutions.

The Ubiquitous Turbine: Charles Parsons and Continuous Improvement

The turbine, based largely on Charles Parsons’s design, is another foundational energy innovation: steam turbines drive most modern electricity generation and powered the world’s navies, while the gas turbine, its descendant, powers aircraft. Parsons, the son of a wealthy Irish peer, was a brilliant engineer who designed his steam turbine in 1884. He recognized the efficiency of using multiple turbines in series and the advantages of reaction turbines over impulse designs. His relentless work, often involving trial and error, led to significant improvements in dynamos for electricity generation.

Parsons’s audacious stunt in 1897, demonstrating his turbine-powered ship Turbinia at top speed during Queen Victoria’s Diamond Jubilee fleet review, forced the Royal Navy to adopt his technology. Ridley emphasizes that the turbine’s history is one of profound gradualism, with continuous improvements building on earlier inventions from figures like Alessandro Volta, Michael Faraday, and Werner von Siemens. Modern combined-cycle gas turbines achieve 60% efficiency, a steady progression from Parsons’s initial 2%, illustrating that innovation is a collective effort of many brains over time, rather than a series of sudden breakthroughs.
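The claim of "profound gradualism" can be put in rough quantitative terms. A minimal sketch: if efficiency climbed from Parsons's initial 2% to today's 60% in combined-cycle plants, the implied constant annual rate of improvement is modest — the span of roughly 130 years (1884 to the mid-2010s) is our assumption, not a figure from the book.

```python
def implied_annual_gain(eff_start: float, eff_end: float, years: int) -> float:
    """Constant annual multiplier that takes eff_start to eff_end over `years`."""
    return (eff_end / eff_start) ** (1 / years)

# Ridley's figures: ~2% for Parsons's 1884 turbine, ~60% for modern
# combined-cycle gas turbines. The ~130-year span is an assumption.
factor = implied_annual_gain(0.02, 0.60, 130)
print(f"implied annual improvement: {(factor - 1) * 100:.2f}% per year")
# prints: implied annual improvement: 2.65% per year
```

A 30-fold gain dissolves into under 3% a year — exactly the kind of steady, compounding, many-hands progress Ridley describes, rather than a series of sudden leaps.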

Nuclear Power and the Phenomenon of Disinnovation

Nuclear power stands as the 20th century’s only large-scale innovative energy source, a triumph of applied science leading from nuclear fission to global power plants. However, its story since the 1970s is one of “disinnovation,” where the industry is in decline and its technology has stalled. This is primarily due to relentless cost inflation, driven by increasing safety regulations. Ridley argues that the nuclear industry remains insulated from “learning by doing” or trial and error because errors are catastrophic and trials are prohibitively expensive.

This regulatory environment has forced nuclear plants to be built as one-off “Egyptian pyramids,” preventing the cost reductions and performance improvements seen in other technologies. Despite numerous innovative reactor designs (like liquid-metal and liquid-salt reactors that are inherently safer and more efficient), costly over-regulation has funneled all investment into the conventional pressurized-water reactor, ironically prolonging the life of older, less safe plants like Fukushima. Ridley suggests that nuclear power, if developed with less military urgency and more focus on modular, mass-produced designs, might have avoided its current stagnation, underscoring how excessive caution can stifle progress.

Shale Gas Surprise: Deregulation and Accidental Breakthrough

The rise of natural gas, particularly from shale, is presented as one of the 21st century’s most surprising energy innovations, defying earlier predictions of scarcity. This revolution occurred in the US, driven by two main factors: deregulation under President Reagan, which stimulated gas exploration, and technological innovation. The key breakthrough was “slick-water” hydraulic fracturing, combined with horizontal drilling.

Ridley highlights the role of George Mitchell, an entrepreneur who relentlessly invested in trying to extract gas from the Barnett shale. The pivotal moment came in 1996 when Mitchell employee Nick Steinsberger accidentally discovered that dilute, “slick water” worked better than stiff gels for fracking shale. This accidental finding, followed by persistent trial and error, not only dramatically cut costs but doubled productivity. The success of shale gas in the US is attributed to property rights (mineral rights belonging to landowners), a competitive, entrepreneurial “wild-cat” industry, and deep risk capital. This bottom-up innovation has made the US the world’s largest producer of both gas and crude oil, reducing emissions by displacing coal and underscoring how serendipitous discoveries and market freedom can rapidly transform an industry.

The Reign of Fire: The Enduring Importance of Heat-to-Work Conversion

Ridley concludes by emphasizing that the controlled conversion of heat to work remains the foundation of most modern energy systems, supplied primarily by fossil fuels (85% of primary energy). This “impellent use of fire,” originating around 1700, revolutionized human prosperity by enabling the creation of ever more improbable and complex material structures.

The chapter underlines the collaborative and incremental nature of innovation, noting that while individuals like Newcomen, Watt, Edison, Parsons, and Steinsberger played crucial roles, they were “stones in an arch or links in a chain.” The benefits of their innovations were largely reaped by society at large, not just their direct descendants. Ridley acknowledges the future potential of nuclear fusion as a clean energy source, but warns that like nuclear fission, its success will depend on overcoming cost barriers through mass production and learning by doing, rather than through top-down government mandates.

Chapter 2: Public Health

This chapter explores several key public health innovations, demonstrating how progress often comes from unlikely sources, faces fierce resistance, and is driven by practical application before scientific understanding.

Lady Mary’s Dangerous Obsession with Inoculation

The story begins with Lady Mary Wortley Montagu, a wealthy, intelligent Englishwoman who, after being scarred by smallpox, became a passionate advocate for inoculation in the early 18th century. She did not invent inoculation, but observed its practice in Ottoman society while her husband was ambassador to Constantinople. The method involved exposing healthy individuals to a mild form of smallpox by scratching pus into the skin, conferring immunity.

Despite earlier reports to the Royal Society being dismissed as dangerous superstition, Lady Mary’s firsthand account and her brave decision to inoculate her own son and daughter made her a pivotal innovator. She faced fierce denunciation from medical experts who deemed the practice barbaric and unscientific. This episode highlights resistance to innovation based on prejudice (including misogyny), lack of scientific understanding, and fear of the unknown. The eventual adoption of inoculation, and later vaccination (credited to Edward Jenner, though the practice existed before him), exemplifies how practical efficacy often precedes scientific explanation.

Pasteur’s Chickens and the Germ Theory of Disease

The conquest of smallpox by vaccination serves as a prime example of use preceding understanding. It wasn’t until the late 19th century that Louis Pasteur began to explain how vaccines worked, leading to the germ theory of disease. Pasteur’s breakthrough was spurred by a serendipitous accident in the summer of 1879. His assistant, Charles Chamberland, accidentally left chicken cholera broth to go stale while Pasteur was on holiday.

Upon returning, they found the stale broth made chickens ill but didn’t kill them. Intrigued, Pasteur then injected these same chickens with a virulent strain of cholera, and they remained healthy. This demonstrated that exposure to a weakened (attenuated) organism could trigger an immune response against a stronger one. While Pasteur didn’t understand the human immune system, his discovery provided the scientific basis for vaccination, turning what was once a folk practice into a scientifically validated medical intervention.

The Chlorine Gamble That Paid Off

The widespread use of chlorination in public water supplies is another innovation that dramatically improved public health by eradicating diseases like typhoid and cholera. Ridley describes the pivotal moment in 1908, when Dr. John Leal, sanitary adviser to the Jersey City Water Supply Company, daringly decided to drip chloride of lime into the city’s water supply to comply with a court order for “pure and wholesome water.”

Leal’s decision was brave and risky, as adding chemicals to drinking water was widely reviled. He proceeded without seeking public permission, arguing its safety and effectiveness in court. The success of the Jersey City case led to the rapid adoption of chlorination across the US and globally. Ridley traces the idea’s origin through a similar experiment in Lincoln, England, earlier suggestions by Vincent Nesfield in India, and the general use of chloride of lime as a disinfectant. This messy, incremental evolution from various, sometimes mistaken, practices highlights how a disruptive and life-saving innovation can emerge from a confused and enigmatic past, with practical application often preceding full scientific endorsement or public acceptance.

Pearl and Grace Never Put a Foot Wrong: Whooping Cough Vaccine

The story of the whooping cough (pertussis) vaccine in the 1930s highlights the dedication and meticulous work of ordinary, yet extraordinary, women. Pearl Kendrick and Grace Eldering, both former teachers turned bacteriologists at the Michigan state public health laboratory, took on the challenge of developing an effective vaccine for the most lethal childhood disease in America. They started by developing a reliable “cough plate” test to identify infectious individuals, often working long hours in impoverished homes.

Driven by a desire to alleviate suffering, they systematically developed a safe and effective killed-bacterium vaccine using standard techniques. Their rigorous and ethically sound trials (using non-vaccinated controls from welfare statistics, rather than depriving orphans) convinced skeptics. Their ability to secure a visit from Eleanor Roosevelt led to crucial funding. Kendrick and Eldering freely shared their methods and formulae globally, receiving little personal recognition or financial reward. Their work exemplifies innovation as a product of relentless hard work, collaboration with communities, ethical conduct, and open knowledge sharing, resulting in a rapid and permanent decline in pertussis incidence and mortality worldwide.

Fleming’s Luck and the Development of Penicillin

The discovery of penicillin illustrates the role of serendipity in innovation, followed by a long, collaborative struggle to turn a discovery into a practical cure. In August 1928, Alexander Fleming returned from holiday to find a Penicillium mold growing on a petri dish, inhibiting the growth of Staphylococcus bacteria. The unusual London weather at the time (cold, then hot) was crucial for this phenomenon to occur. Fleming was intrigued, but his initial experiments were disappointing, and he was swayed by the skepticism of his mentor, Sir Almroth Wright, about chemical cures. Penicillin remained a laboratory curiosity for over a decade.

The true innovation came with Ernst Chain and Howard Florey at Oxford, who, along with Norman Heatley, developed methods to extract and purify penicillin in quantities sufficient for human trials. Their pivotal moment came in May 1940, when penicillin cured mice of streptococcal infection. The first human trial on Albert Alexander in 1941, though ultimately unsuccessful due to limited supply, demonstrated the drug’s miraculous effect. The Second World War spurred large-scale production efforts, particularly in the US. The story highlights that scientific discovery is only the first step, and tremendous practical work, collaboration, and overcoming commercial reluctance are needed to achieve widespread innovation.

The Pursuit of Polio: Innovation and Its Unintended Consequences

The pursuit of a polio vaccine in the 1950s highlights both monumental success and significant ethical complexities, including unintended consequences and regulatory challenges. The rising polio epidemic in the US, ironically worsened by improved sanitation (which delayed exposure, leading to more virulent infections in older children), created immense public demand for a vaccine. Jonas Salk’s inactivated vaccine, developed by growing the virus in monkey kidney tissue cultures and inactivating it with formaldehyde, was rushed from field trials into mass use in 1955.

However, Dr. Bernice Eddy at the Division of Biologics Standards raised concerns, finding the vaccine could sometimes cause polio in monkeys. Her warnings were largely ignored, leading to the “Cutter incident” where thousands were infected and hundreds paralyzed due to inadequately inactivated virus. Eddy later made a groundbreaking discovery: monkey kidney cultures could transmit cancer-causing viruses (SV40) to hamsters, raising fears about long-term risks to vaccinated humans. She was effectively sidelined and forbidden to speak about her findings. While no epidemic of unusual cancers emerged, SV40 DNA has been found in human tumors, illustrating how rushing innovation, suppressing critical research, and overlooking safety signals can lead to profound ethical dilemmas. Despite this, the global polio eradication effort, combining inactivated and live oral vaccines, has achieved over 99.99% success, largely eliminating polio from the world.

Mud Huts and Malaria: Insecticide-Treated Bed Nets

The story of malaria control provides a powerful example of a simple, low-tech innovation (insecticide-treated bed nets) having a monumental impact on global health. By the 1980s, malaria was a leading killer, and existing interventions were faltering. In 1983, in Burkina Faso, French and Vietnamese scientists Frédéric Darriet and Pierre Carnevale conducted a beautifully simple, yet groundbreaking, experiment using permethrin-treated mosquito nets.

Their meticulous study demonstrated that treated nets, even torn ones, repelled mosquitoes, reduced their entry into huts, decreased blood-feeding, and increased mosquito mortality. This research, though not widely celebrated, proved to be the “magic bullet” against malaria vectors. The widespread adoption of insecticide-treated bed nets began in 2003, directly correlating with a rapid and sustained decline in malaria mortality globally. By 2010, over a billion nets had been used, saving millions of lives. This highlights how rigorous, focused experimentation on seemingly simple solutions can yield profound public health benefits, even when initial ideas might seem obvious or low-tech.

Tobacco and Harm Reduction: The Rise of Vaping

The chapter concludes with a contemporary example: the harm reduction strategy in tobacco consumption through vaping (electronic cigarettes). Smoking is the modern world’s greatest killer, but addiction makes it resistant to simple cessation efforts. Hon Lik, a Chinese pharmacist and smoker, invented the first modern e-cigarette around 2000, seeking a less harmful way to get nicotine. His innovation involved vaporizing liquid nicotine, offering a technological alternative to combustion.

While the invention came from China, its widespread adoption as a public health tool has been most successful in Britain. This is attributed to the UK government’s “nudge unit,” which, under David Halpern’s leadership, explicitly decided against banning e-cigarettes. Instead, they opted to encourage their availability and regulate them to improve quality and reliability. This contrasts sharply with the US and Australia, where vaping faced official discouragement or outright bans, often due to lobbying from tobacco companies (fearing competition) and pharmaceutical companies (protecting nicotine replacement therapy sales), as well as public health groups (concerned about a “new” form of smoking). The UK’s pragmatic harm reduction approach, despite initial fierce opposition, has accelerated the decline of smoking. This illustrates how policy choices and regulatory environments profoundly shape the success and societal benefit of innovations, particularly when they challenge entrenched interests or public perceptions.

Chapter 3: Transport

This chapter traces revolutionary innovations in transport, from the locomotive to jet engines, highlighting the incremental nature of progress and the interplay of invention, engineering, and market forces.

The Locomotive and Its Line: George Stephenson’s Practical Genius

Before the 1820s, human travel speed was limited to that of a galloping horse. The locomotive dramatically changed this, becoming three times faster within a generation. Ridley credits George Stephenson, a humble colliery worker, as a pivotal innovator. Stephenson, initially known for repairing Newcomen engines, realized the need for simultaneous innovation in both engines and rails. He improved on early, unreliable locomotive designs like Richard Trevithick’s.

Stephenson’s Blücher (1814) could haul significant loads, but his greatest contribution was his work on the Stockton to Darlington railway (1825), the first public railway line. Here, he championed John Birkenshaw’s wrought-iron rails over cast iron, which could better withstand locomotive weight. His most famous design, the Rocket (1829), incorporated innovations from Henry Booth, like multiple fire tubes and a blast-pipe, setting the basic design for decades. The railway boom that followed in the 1840s, though fueled by competitive bubbles and fraud, profoundly connected Britain and was rapidly adopted worldwide, demonstrating how practical improvements and infrastructure development drive widespread adoption.

Turning the Screw: The Propeller’s Triumph Over Paddle Wheels

The advent of steam engines on ships coincided with locomotives, but it wasn’t until the screw propeller replaced paddle wheels in the mid-19th century that ocean-going steam truly challenged sail. The idea of the screw propeller had a long prehistory, with numerous patents filed. However, Francis Smith, a farmer, built a working model in 1835 and patented it. A serendipitous accident with his first full-scale boat, where a part of the screw broke off, revealed that a single turn was more efficient, illustrating the role of unintended discoveries in optimization.

Simultaneously, John Ericsson in Sweden (and later the US) also patented a similar device. Despite early Admiralty skepticism, Smith’s ship Archimedes proved the screw’s worth in rough weather and against paddle steamers. Navies quickly switched to screws, recognizing their superior efficiency and safety. The continuous evolution of screw design over decades, as understanding of turbulence and drag improved, further emphasizes the incremental and collaborative nature of innovation, driven by practical application and ongoing refinement.

Internal Combustion’s Comeback: From Luxury to Mass Market

The internal combustion engine (ICE) also shows a pattern of long prehistory, initial failures, and eventual triumph. Early attempts like Isaac de Rivaz’s (1807) were too inefficient to compete with steam. While steam cars and even electric cars dominated the early 20th-century motor market, the ICE eventually conquered all. The central invention was Nikolaus Otto’s four-stroke cycle (1876), which significantly improved efficiency through compression and ignition.

While Otto focused on stationary engines, his employees Gottlieb Daimler and Wilhelm Maybach adapted the ICE for cars. Karl Benz produced the first complete car in series production (1886). Crucial innovations in car design, like the front-mounted engine and radiator, came from Émile Levassor in France. The Mercedes 35hp (1900) set the standard for modern car design. However, it was Henry Ford’s genius for cost control and mass production with the Model T that transformed the car from a luxury item into an affordable utility for ordinary people. This demonstrated that true innovation often lies in making technology accessible and ubiquitous, rather than just inventing it.

The Tragedy and Triumph of Diesel: Powering Globalization

Rudolf Diesel is presented as an unusual innovator: a man driven by social justice, who died tragically before seeing the full success of his invention. Diesel, unlike many engineers, started from scientific principles, aiming to create a highly efficient engine based on the Carnot cycle. By 1897, with the help of Heinrich von Buz, he had a working engine that was twice as efficient as gasoline engines. However, turning his invention into a reliable and affordable product proved immensely difficult due to the challenges of high-pressure operation.
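Diesel's scientific starting point, the Carnot cycle, is worth spelling out, since it explains his pursuit of high-pressure operation. The following is standard thermodynamics background, not a formula given in the summary:

```latex
\eta_{\max} = 1 - \frac{T_C}{T_H}
```

Here $T_H$ and $T_C$ are the absolute temperatures of the hot and cold reservoirs. The ceiling on efficiency rises only as $T_H$ rises, and in an engine higher compression means higher peak temperature — which is why Diesel's design demanded the punishing high pressures that made it so hard to turn into a reliable product.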

Despite Diesel’s personal disillusionment and death in 1913, his engine went on to “run the world.” Today, vast diesel engines power almost all large cargo ships, making global trade possible and playing a larger role in globalization than political agreements. Smaller diesels drive trucks, trains, and farm machinery. Ridley notes that the adoption of diesel cars in Europe in the early 21st century, driven by climate concerns, was later reversed due to urban air quality issues, illustrating how societal priorities and unintended consequences can shift the trajectory of technological adoption.

The Wright Stuff: The Gradual Dawn of Powered Flight

The first powered, controlled flight by the Wright brothers in December 1903 is often seen as a sudden breakthrough, but Ridley argues it was a culmination of years of systematic, incremental work. Unlike Samuel Langley, whose government-funded “aerodrome” failed spectacularly, the Wrights, as experienced bicycle makers, methodically worked through challenges. They drew upon the experiences of others, particularly Otto Lilienthal and Octave Chanute, and conducted thousands of wind tunnel experiments to understand lift and drag.

Their approach was one of relentless trial and error, with frequent crashes teaching them how to pilot. Crucially, they left the motor to last, relying on their mechanic Charlie Taylor to build a lightweight engine. Their success was due to dedication, hard work, and a systematic, iterative process, not a singular flash of genius. Despite their initial success being largely ignored, Wilbur Wright’s demonstrations in France in 1908 finally convinced the world, spurring a rapid race in aviation. The story highlights how true innovation is a long slog of persistent effort, and how initial skepticism often surrounds breakthroughs that defy conventional wisdom.

International Rivalry and the Jet Engine: Simultaneous Innovation

The development of the jet engine exemplifies simultaneous invention driven by international rivalry. Frank Whittle in Britain and Hans Joachim Pabst von Ohain in Germany independently conceived and patented jet engine designs in the 1930s. Both faced immense engineering challenges, particularly in developing materials that could withstand extreme temperatures and pressures. Whittle’s work was initially hampered by lack of funding and skepticism, leading him to let his patent lapse before reviving the project with private backing.

Ohain’s engine achieved the first jet-powered flight in August 1939, just days before World War II began, with Whittle’s following in May 1941. Both countries had jet fighters in combat by 1944. Ridley argues that while Whittle (and Ohain) were brilliant pioneers, the jet engine would have emerged anyway due to the underlying technological maturity. The post-war era saw continuous, incremental improvements by large companies like Pratt & Whitney, General Electric, and Rolls-Royce, illustrating how innovation progresses through collective effort and ongoing refinement, often driven by intense competition.

Innovation in Safety and Cost: The Airline Industry’s Success

The airline industry’s remarkable safety record is a testament to gradual but pervasive innovation. In 2017, for the first time, there were no commercial passenger jet fatalities, despite a record 37 million flights. This is a 54-fold decline in fatalities per passenger-kilometer since 1970. This improvement was achieved not through single breakthroughs, but through a multitude of “dull, low-tech but vital practices” like crew resource management, extensive checklists, cross-checking, and a culture of open learning from mistakes.

Ridley cites the 1992 Air Inter crash as an example where multiple factors (computer error, poor communication, lack of warning systems) highlighted areas for improvement. The key has been the transparent sharing of accident investigations globally. This safety revolution coincided with the democratization of air travel through deregulation and falling prices, championed by innovators like Herb Kelleher (founder of Southwest Airlines). Kelleher’s relentless fight against cartels and his focus on cost-cutting and customer experience paved the way for the budget airline revolution. This demonstrates that safety improvements can co-exist with increased accessibility and efficiency, and that competition and learning from errors drive true progress.

Chapter 4: Food

This chapter explores how innovations in food production have transformed human societies, addressing issues like famine, land use, and the evolution of human diets. Ridley emphasizes that these changes are often gradual, meet resistance, and rely on recombination and scientific understanding that follows practical application.

The Tasty Tuber: The Potato’s Slow Diffusion

The potato, native to the Andes, became a crucial innovation in the Old World, but its adoption was slow and met with significant resistance. Despite its superior energy yield per acre compared to grain, it faced challenges due to its adaptation to 12-hour days (initially producing poorly in Europe’s longer summers) and deep-seated prejudice. Clergymen in England forbade its consumption for not being mentioned in the Bible, leading to absurd claims like “No potatoes, no popery!”

The French parliament even banned potato cultivation for human food in 1748 due to pseudoscientific fears of it causing leprosy. Its eventual widespread adoption in places like Ireland and Lancashire was driven by its reliable harvests in wet years when grain crops failed. War also played a role, as potatoes, being underground, often survived army depredations, leading farmers to prefer them. Antoine-Augustin Parmentier in France became a key innovator, using publicity stunts and scientific arguments to popularize the potato in the late 18th century, demonstrating how persistent advocacy can overcome cultural inertia. The Irish potato famine of 1845 exposed the vulnerability of over-reliance on a single crop, leading to devastating consequences. Modern innovation continues, with gene-edited blight-resistant potato varieties now reducing the need for extensive spraying.

How Fertilizer Fed the World: The Haber-Bosch Process

The Haber-Bosch process, invented by Fritz Haber (1908) and scaled by Carl Bosch, stands as one of the most vital innovations in human history, enabling “bread from air” by fixing nitrogen from the atmosphere to make ammonia for fertilizer. Before this, nitrogen was a limiting nutrient for crops, reliant on manure, legumes, or natural deposits like guano and Chilean saltpetre. These natural sources fueled temporary booms but were unsustainable.

Sir William Crookes’s 1898 warning of looming global starvation spurred the race to synthesize nitrogen. Haber, despite anti-Semitic discrimination, persevered in experiments, eventually discovering that osmium (and later an iron-aluminium-calcium mixture scaled by Bosch) acted as an effective catalyst under high pressure. Bosch’s monumental engineering effort at BASF, borrowing ideas from diverse industries (locomotives, steelmaking, cannon designs), transformed Haber’s laboratory invention into a commercial factory process, overcoming immense challenges like hydrogen embrittlement. This collaboration between science and industry, driven by persistent trial and error, averted predicted famines and now accounts for half the nitrogen in human food. It is an example of innovation addressing a critical global need and the ingenuity required to scale a scientific breakthrough.

Dwarfing Genes from Japan: The Green Revolution’s Foundation

The dwarfing genes in wheat were a crucial innovation that, when combined with synthetic fertilizer, powered the Green Revolution. This story highlights a long, multinational, and incremental path of plant breeding. It began in Japan in 1917, when an unknown breeder at the Central Agricultural Experiment Station crossed an American wheat variety with a native Japanese dwarf variety (Daruma) to produce Fultz-Daruma. This led to Nôrin-10, a dwarf wheat variety marketed in 1935.

Cecil Salmon, an American agronomist serving in Japan post-WWII, sent Nôrin-10 samples back to the US. Orville Vogel at Washington State University used Nôrin-10 to address the “lodging” problem (wheat collapsing under its weight due to fertilizer). He cross-bred it, seeking shorter, stronger stems. Norman Borlaug, working in Mexico to find rust-resistant, high-yield wheats, learned of Vogel’s work and incorporated Nôrin-10 derivatives into his breeding program. Borlaug’s relentless efforts, including growing two generations a year, led to short-strawed, high-yielding, rust-resistant varieties that transformed Mexican agriculture. Despite fierce resistance from bureaucrats and social scientists in India and Pakistan (who feared disease, dependence, or social disruption), Borlaug, with support from ministers, convinced them to adopt his wheats. This “fifty-year story” of genetic recombination and persistent advocacy led to India and Pakistan achieving grain self-sufficiency, averting mass starvation.

Insect Nemesis: Bacillus Thuringiensis (Bt) Crops

The development of insect-resistant crops using Bacillus thuringiensis (Bt) genes is a prime example of how scientific discoveries, originating from unlikely sources, can be transformed into environmentally beneficial agricultural innovations. The story begins in 1901 with Ishiwata Shigetane, a Japanese biologist investigating a silkworm disease, who identified the bacterium that would later be named Bacillus thuringiensis (Bt) by Ernst Berliner in 1909. Bt produces a protein lethal to caterpillars of moths and butterflies.

By the 1930s, Bt was available as a natural, sprayable insecticide for organic farmers. The breakthrough for widespread agricultural use came with Marc Van Montagu and Jeff Schell in Belgium, who in 1974 discovered how Agrobacterium tumefaciens could insert its DNA into plants. By manipulating this mechanism, they (along with others) developed a way to insert any gene into a plant, including the Bt insecticidal gene. In 1987, they created a tobacco plant with the Bt gene, lethal to pests. This led to Bt cotton and maize, which are inherently insect-resistant, reducing the need for chemical sprays. Today, over 90% of cotton and one-third of maize are Bt, leading to reduced pesticide use, increased wildlife on farms, and a “halo effect” benefiting non-Bt crops. Despite its success and safety, this technology faced immense opposition in Europe due to ideological anti-GMO campaigns.

Gene Editing Gets Crisper: CRISPR Technology

The CRISPR gene-editing technique is a revolutionary innovation with immense potential in agriculture and medicine, but its origins are characterized by frenzied credit disputes and a long, winding road from obscure microbial biology to groundbreaking technology. Ridley highlights that the credit for CRISPR extends beyond the prominent US university groups (Jennifer Doudna’s at Berkeley and Feng Zhang’s at MIT) who made crucial breakthroughs around 2012.

The true story begins with microbiologists working on practical problems. Francisco Mojica in Spain, studying salt-loving microbes in 1993, noticed repeated, palindromic DNA sequences interspersed with unique “spacer” sequences. In 2003, he discovered these spacers matched viral DNA, leading him to hypothesize that CRISPR was a microbial immune system that recognized and cut viral sequences. Simultaneously, Philippe Horvath, an industrial microbiologist at Danisco (making yogurt and cheese), independently discovered that bacteria with more spacers were more resistant to viral infections. This confirmed Mojica’s hypothesis. The leap of logic was then to realize that this microbial system could be borrowed and repurposed as a precise genetic-engineering tool. This “gene editing” allows for precise alterations to DNA, as demonstrated by gene-edited pigs resistant to viruses or potatoes that don’t brown. Despite its immense promise and rapid adoption globally, Europe’s strict regulation of gene-edited organisms (treating them like GMOs) has significantly stifled its development on the continent, leading to a widening innovation gap.

Land Sparing Versus Land Sharing: Intensification’s Environmental Benefits

Ridley argues that the immense improvements in agricultural yields from mechanization, fertilizer, new crop varieties, pesticides, and genetic engineering have not only banished famine and reduced malnutrition but have also, counterintuitively, been beneficial for nature. This is due to “land sparing,” where increased productivity per acre means less land is needed for farming, thus preserving forests, wetlands, and nature reserves. Between 1960 and 2010, the acreage needed to produce a given quantity of food declined by 65%.

This has led to a steady increase in wild land and forest cover in many places. Ridley refutes the idea of “land sharing” (low-yield farming hoping wildlife thrives alongside crops), arguing that intensive agriculture not only uses less land but also produces fewer pollutants, causes less soil loss, and consumes less water than extensive or organic systems for the same food yield. He suggests that continued innovation in crop yields (e.g., tweaking photosynthesis, nitrogen fixation, pest resistance) could lead to even greater land sparing by 2050, allowing for the expansion of national parks and wilderness areas, enhancing the planet’s ecology while feeding a growing population.

Chapter 5: Low-Technology Innovation

This chapter delves into innovations that, while not involving complex scientific breakthroughs, have had profound impacts on human life due to their simplicity, practicality, and widespread adoption.

When Numbers Were New: The Power of Zero

The adoption of Indian numerals, modern arithmetic, and crucially, the concept of zero in Europe around 1202, revolutionized mathematics and commerce. Leonardo of Pisa, or Fibonacci, introduced these ideas through his book Liber abbaci, having learned them in North Africa from Arab traders who, in turn, borrowed them from India. The key features were positional notation (where a digit’s value depends on its place) and zero as a placeholder and a number in its own right.

Before this, Europe used Roman numerals, which made multiplication and complex accounting incredibly difficult. While similar concepts of zero existed in ancient Sumer, Babylon, and even the Mayan civilization (which later collapsed, taking the idea with it), it was Brahmagupta in India (628 AD) who first treated zero as a proper number with its own rules for arithmetic. Fibonacci’s genius was to popularize this “extraordinarily important” innovation for merchants through practical examples, showing its commercial utility. The political fragmentation of Europe (unlike centralized empires like the Ottoman or Ming) allowed these new ideas to spread rapidly, driven by commerce, ultimately displacing Roman numerals and laying the groundwork for modern finance.
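The mechanics of positional notation that Fibonacci popularized can be sketched in a few lines. This is an illustrative toy, not anything from the book; `positional_value` is a hypothetical helper showing how a digit’s worth depends on its place, and how zero, acting as a placeholder, shifts the other digits into their correct positions.

```python
# A sketch of positional notation: a digit's value depends on its place,
# so multi-digit arithmetic reduces to simple digit-by-digit rules.
# Roman numerals, lacking place value and zero, admit no such recipe.

def positional_value(digits, base=10):
    """Interpret a list of digits, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d  # zero as a placeholder shifts the others
    return value

assert positional_value([1, 0, 2]) == 102        # the zero "holds" the tens place
assert positional_value([1, 0, 1, 1], base=2) == 11  # the same rule works in any base
```

The same positional rule works in any base, which is part of why the system displaced Roman numerals so thoroughly once merchants saw its utility.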

The Water Trap: Revolutionizing Sanitation

The seemingly simple S-bend or U-bend in plumbing pipes, which traps water to prevent sewage odors from entering buildings, is hailed as a “gorgeously simple and exquisitely clever” innovation that revolutionized sanitation. Before its invention, even early flush toilets (like Sir John Harington’s in 1596) failed to eliminate the smell, making chamber pots more appealing.

The S-bend was patented in 1775 by Alexander Cumming, a clockmaker and mathematician. His design, while containing the crucial water trap, included a problematic sliding valve. Three years later, Joseph Bramah, a skilled craftsman and inventor, improved the water closet with a hinged flap and brought high standards of craftsmanship, leading to its commercial success among the wealthy. The indoor water closet only truly took off in the late 19th century with the construction of vast sewer systems, like London’s. Thomas Crapper, a Yorkshire plumber, further refined the water trap and siphon system, making WCs reliable, simple, and affordable, to the point his name became synonymous with the toilet. This demonstrates how incremental refinements of low-tech inventions, combined with infrastructure development, can lead to widespread societal transformation.

Crinkly Tin Conquers the Empire: The Ubiquitous Corrugated Iron

Corrugated iron, often overlooked for its ordinariness, is presented as an unlikely but profoundly beneficial innovation. Invented in 1829 by Henry Robinson Palmer, a trained engineer, it involves passing wrought iron sheets through rollers to create a sinusoidal wave. This process immensely strengthened the iron, making it rigid and capable of spanning wide gaps without much support, ideal for roofs and other building parts. Palmer’s patent went unchallenged, and the industry grew rapidly, especially after the introduction of galvanization (protecting iron with zinc) within ten years.

Corrugated iron became indispensable in the British Empire, particularly in Australia during the 1850s gold rush, where its resistance to termites and fire, light weight, and prefabricated nature (in a labor-scarce country) made it perfect for quickly erecting towns. It was shipped globally, used for everything from churches to military blockhouses. Ridley argues that this simple innovation has sheltered countless millions from the elements, provided affordable housing in slums, and likely saved many forests by reducing timber demand. Its enduring utility, despite its “ugliness,” highlights the impact of simple, robust, and cost-effective innovations.

The Container That Changed Trade: Malcom McLean’s Organizational Genius

In the mid-1950s, shipping goods by sea was slow, expensive, and inefficient, with port costs often exceeding ocean voyage costs. The containerization revolution, driven by Malcom McLean, transformed global trade. While the idea of standardized containers existed, previous attempts had failed due to size and logistics issues. McLean, a trucking entrepreneur, conceived the idea of lifting trailer bodies off their wheels and stacking them on ships, rather than loading whole trailers.

In 1956, his converted oil tanker, the SS Ideal X, set sail with 58 containers, demonstrating a 94% cost reduction compared to traditional break-bulk cargo. McLean and his engineer, Keith Tantlinger, systematically redesigned everything, from containers to cranes, for efficiency. McLean faced immense human obstacles, primarily union resistance and legal battles, which nearly bankrupted him. However, his persistence, coupled with opportunities like the Vietnam War (where his container ships revolutionized military logistics), led to the industry’s eventual adoption of standardized containers. McLean sold his company, Sea-Land, in 1970, and though his later ventures failed, his legacy is the vast, efficient global container trade that is vital to the world economy today. This innovation demonstrates that organizational ingenuity and relentless entrepreneurship, rather than new technology, can drive profound economic change.

Was Wheeled Baggage Late? The Rollaboard’s Success

The seemingly simple wheeled suitcase is considered a pinnacle of civilization, yet it appeared surprisingly late, after humans landed on the moon. Ridley questions why this “low-tech” innovation didn’t emerge sooner, despite various earlier patents for wheeled luggage dating back to the 1920s.

Bernard Sadow, a baggage company executive, patented rolling luggage in 1972 after a frustrating experience at American customs. However, retailers initially rejected his idea due to objections about added weight, cost, and the perceived redundancy given the availability of porters and baggage trolleys. The primary reason for its late adoption was the architecture of stations and airports (short concourses, numerous staircases, proximity of drop-off points), which made wheels less practical.

The rapid expansion of air travel in the 1970s and the increasing distances passengers had to walk created a tipping point. A decade later, Robert Plath, a pilot, improved on Sadow’s design by inventing the Rollaboard (1987), attaching two wheels to one side and adding a telescopic handle, allowing the case to be tilted and pulled upright. Plath’s design became the industry standard. This story illustrates that innovation often requires market readiness and infrastructure changes before it can gain widespread traction, and that even simple ideas often undergo incremental improvements to optimize their utility.

Novelty at the Table: Innovation in Cuisine

The restaurant industry is presented as a dynamic sector addicted to “permissionless innovation,” where constant novelty drives rapid turnover. Over the past half-century, much innovation has come from importing diverse foreign cooking styles, but as this method reaches its limits, the industry turns to other forms of creativity.

Ridley highlights “recombination” as a principal source of culinary innovation, where old ingredients or techniques are brought together in new combinations. René Redzepi’s Noma, for example, achieved fame by paradoxically re-creating ancient hunter-gatherer localism. High-end restaurants like El Bulli invest in R&D facilities where chefs and food scientists develop new recipes through “feed-forward trial and modification,” similar to Edison’s approach to the light bulb. Beyond ingredients and recipes, innovation also transforms eating methods. Ray Kroc’s McDonald’s success lay not in inventing the hamburger, but in standardizing its preparation and rolling out a franchise model focused on uniformity and affordability, making fast food accessible and consistent on a global scale. This demonstrates how commercialization and operational efficiency can be as revolutionary as culinary invention.

The Rise of the Sharing Economy: Connectivity Enabling Old Ideas

The sharing economy, exemplified by eBay, Uber, and Airbnb, is characterized as “low-tech” in concept but revolutionary due to internet connectivity. These innovations apply simple, old ideas—like people with spare assets (rooms, cars) connecting with those who need them—on a vast scale. Airbnb, founded in 2008, has surpassed five million listings, unlocking immense latent value in people’s homes and offering more affordable lodging options.

Ridley notes that while such innovations bring problems (e.g., impact on local housing), they also fulfill a clear need. The sharing economy represents a form of “more from less,” increasing economic enrichment by using existing resources more frugally (e.g., cars sitting idle 95% of the time). Other examples include VIPKid (connecting Chinese students with English teachers online) and Hipcamp (landowners renting space to campers). This demonstrates how digital platforms facilitate efficient exchange and specialization, proving that old economic principles can be powerfully revitalized by new technological infrastructure.

Chapter 6: Communication and Computing

This chapter delves into the interconnected histories of communication and computing, revealing that progress in these fields is characterized by incremental evolution, surprising unpredictability, and collaborative effort, rather than single breakthroughs.

The First Death of Distance: Telegraph and Telephone

The electric telegraph dramatically shrank the world by enabling messages to travel in seconds rather than months. The idea emerged in 1832 on a ship, when Samuel Morse was inspired by a conversation about electromagnetism. While Morse is credited, he faced rival claims from European inventors like Charles Wheatstone and William Cooke, and his work relied on others’ insights (e.g., Leonard Gale’s relays). Morse’s true achievement was his “dogged entrepreneurship” in battling political indifference, mechanical failures, and lawsuits to bring the telegraph to market, notably with the Washington to Baltimore line (1844).

The telegraph’s success led to a global network of wires and undersea cables, profoundly impacting commerce, politics, and social life. The telephone followed inevitably, with Alexander Graham Bell narrowly beating Elisha Gray to the patent in 1876. However, both were preceded by Antonio Meucci, an Italian immigrant whose lack of funding and legal defense meant his early work was forgotten. This highlights how commercialization and legal protection are as crucial as invention for an innovation to leave a lasting mark.

The Miracle of Wireless: Marconi and Radio

Guglielmo Marconi is presented as an unusual innovator: aristocratic, skilled in both invention and business, and directly applying scientific discoveries. Inspired by Heinrich Hertz’s demonstration of electromagnetic waves in 1888, Marconi began experimenting with wireless telegraphy. By 1895, he had sent signals across his family villa’s hillside. He promptly moved to London to secure patents, aided by well-connected family and lawyers.

While Marconi was not the sole inventor (others like Jagadish Chandra Bose, Oliver Lodge, and Alexander Popov were conducting similar experiments), his success lay in his commercial acumen and ability to integrate existing devices and ideas into a practical system. His persistent demonstrations, including transatlantic transmissions, secured his fame. However, he spent years embroiled in legal battles over patents with rivals like Reginald Fessenden and Lee de Forest. Ridley notes that radio broadcasting, as opposed to point-to-point communication, was a later realization for Marconi, who observed its power used by the Vatican and later, chillingly, by the Nazis. The story illustrates how innovation is a collective effort, patents can create disputes, and technologies can have unforeseen societal impacts, both positive and negative.

Who Invented the Computer? A Confluence of Ideas

Ridley argues there is no single inventor of the computer, but rather a “regiment of people” who made incremental, cross-fertilized contributions. The ENIAC (1945), a 30-ton, house-sized machine at the University of Pennsylvania, is often cited as the first general-purpose electronic computer, but it was decimal, not binary, and its patent was later challenged for borrowing ideas from John Vincent Atanasoff’s uncompleted machine.

The Colossus (1943) at Bletchley Park, designed by Tommy Flowers with input from Alan Turing, came earlier and was fully electronic, binary, and programmable, but it was not general-purpose, and its story remained secret for decades. Alan Turing’s 1937 paper laid the theoretical groundwork for a universal computer. Claude Shannon’s 1937 MIT thesis showed how Boolean algebra could be embodied in electrical circuits. Johnny von Neumann’s 1945 “First Draft” outlined the modern stored-program architecture, drawing on ideas from Howard Aiken’s Mark I at Harvard (a programmable but non-electronic computer). Crucially, Grace Hopper, Aiken’s deputy, pioneered software concepts like subroutines and compilers, arguably making her the “mother of the software industry.”

Even earlier, Charles Babbage’s Analytical Engine (never completed) and Ada Byron, Countess of Lovelace’s notes on software and programs in the 1840s prefigured modern computing. The Jacquard loom (using punched cards) provided a practical precedent for programming. Ridley concludes that computing was an inevitable outcome of converging technologies and ideas, with no single “eureka moment.” The wartime acceleration of computing is debatable, as innovations were already underway and might have progressed faster without secrecy.

The Ever-Shrinking Transistor: Gordon Moore’s Law

The extraordinary evolution of computing from 1950 to 2000 is epitomized by Gordon Moore’s Law, which describes the exponential increase in the number of transistors on a chip, doubling approximately every two years. Moore, a quiet and unassuming figure, co-founded Intel and observed this trend in 1965. The unique feature of miniaturizing transistors is that smaller transistors use less power, generate less heat, and switch faster, creating a virtuous circle where cheaper, faster chips find more uses.

Moore’s Law persisted for about 50 years, surprisingly unaffected by external events like wars or recessions. Ridley highlights that this was not a sudden breakthrough but a steady, incremental progression building on earlier technologies like vacuum tubes. Intel’s “tick-tock” strategy of alternating between new chip releases and design fine-tuning exemplified this continuous improvement. The industry’s concentration in Silicon Valley was due to its egalitarian, open corporate culture and rapid cross-pollination of ideas. Ridley emphasizes that individual figures like Steve Jobs or Jeff Bezos were products of this technological advance as much as its causes, and that the industry’s rapid innovation has consistently defied pessimistic predictions about its limits or impact.
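Moore’s observation reduces to simple compound-growth arithmetic. The sketch below is illustrative, not from the book: the function name is hypothetical, the doubling period in practice drifted between roughly eighteen months and two years, and the starting figure uses the widely cited ~2,300 transistors of the Intel 4004 (1971).

```python
# Back-of-envelope Moore's Law arithmetic: transistor counts doubling
# roughly every two years. Starting figures are illustrative.

def transistors(start_count, start_year, year, doubling_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Fifty years of doubling every two years is 2**25, roughly a
# 33-million-fold increase over the 1971 starting point.
count_2021 = transistors(2300, 1971, 2021)
```

Fifty years at this pace takes ~2,300 transistors to tens of billions, which is the sense in which a steady, incremental doubling compounds into the “extraordinary evolution” Ridley describes.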

The Surprise of Search Engines and Social Media

The rapid and vast growth of search engines and social media took the world by surprise, despite their underlying inevitability. Ridley notes that nobody accurately predicted their scale or impact beforehand. The first recognizable search engine, Archie, emerged in 1990 from McGill University, but its creators never commercialized it. Early search engines like WebCrawler, Lycos, AltaVista, Excite, and Yahoo! followed, all working on text-crawling bots and indexing.

Google, founded by Larry Page and Sergey Brin in 1998, revolutionized search by using a “PageRank” algorithm that weighted websites based on the number and quality of inbound links, effectively leveraging collective human judgment. Initially, Page didn’t even aim to build a search engine. Google’s eventual decision to integrate advertising as its core revenue stream made it immensely profitable.
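The core idea of PageRank, that a page’s weight comes from the number and weight of pages linking to it, can be sketched with a short power iteration. This is a minimal sketch of the principle, not Google’s implementation: the toy graph, the damping factor of 0.85, and the fixed iteration count are all illustrative assumptions.

```python
# Minimal PageRank power iteration on a toy link graph: each page's
# rank is repeatedly redistributed along its outbound links, so pages
# with many high-ranked inbound links accumulate the most weight.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],  # d links out but nothing links to d
}
ranks = pagerank(graph)
# "c", with three inbound links, ends up highest; "d", with none, lowest
```

The algorithm effectively aggregates collective human judgment: every hyperlink is treated as a vote, and votes from well-linked pages count for more.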

Social media, exemplified by Facebook (launched 2004), similarly defied predictions that the internet would make people antisocial. Instead, it enabled widespread social engagement, though with unforeseen negative consequences like political polarization and filter bubbles. Ridley cites Eli Pariser’s analysis of Google’s personalized search results and Facebook’s “like” button as key moments when algorithms began to confirm user biases. While early internet visionaries hoped for a utopian, flat world, social media, like early radio, proved to be a polarizing force, demonstrating how innovation often takes unexpected and complex societal directions.

Machines That Learn: The Evolution of Artificial Intelligence

Artificial intelligence (AI), though one of the oldest concepts in computing, has a history of repeated failure to deliver on its hype, leading to “AI winters.” Ridley notes that when computers achieve clever feats, we often redefine the task as non-intelligent, as seen with Deep Blue’s chess victory. The turning point came with DeepMind’s AlphaGo defeating the world Go champion in 2016, a success driven not by brute force but by the program’s ability to learn through neural networks, often making moves human experts couldn’t explain.

This shift in AI’s focus from “expert systems” to “learning approaches” was enabled by three factors: new software (Geoffrey Hinton’s “back propagation” in neural networks), new hardware (the Graphics Processing Unit or GPU), and new data (the exponential growth of information). GPUs, initially developed for the computer gaming industry by companies like Nvidia, proved invaluable for deep learning. Ridley cautions that while AI breakthroughs are significant, challenges remain in ensuring trustworthiness and explainability. He predicts that AI will primarily augment rather than replace humans, leading to “centaur” collaborations (algorithms and people working together), and that the true revolution will involve making technologies trustworthy through transparent processes.

Chapter 7: Prehistoric Innovation

This chapter pushes the timeline of innovation back into prehistory, arguing that the same fundamental principles—gradual evolution, the role of energy, and the impact of social exchange—apply to humanity’s most ancient and foundational advancements.

The First Farmers: Agriculture as a Gradual Transformation

The adoption of farming, roughly 10,000 years ago, was one of the most momentous innovations, transforming humans from sparse hunter-gatherers into dense, landscape-altering populations. This “agricultural revolution” had implications as vast as the Industrial Revolution, enabling the rise of new social structures and cultural innovations. Ridley emphasizes that agriculture was a simultaneous invention, emerging independently in at least six different regions worldwide (Near East, China, Africa, South America, North/Central America, New Guinea) within a few millennia.

This synchronicity suggests that the changing climate conditions at the end of the last ice age were crucial. The Holocene era (our current interglacial period) brought warmer, wetter, and more stable conditions with higher carbon dioxide levels, making plant growth significantly easier and more productive. Prior Pleistocene volatility would have made farming impossible. Ridley argues the shift was not sudden but a gradual process, starting as gardening, where humans subtly encouraged favored plants, leading to a virtuous circle of natural selection. This innovation, like later ones, often began in regions of plenty (fertile river valleys) rather than from desperation. Farming also led to co-evolution with human genetics, exemplified by the selection for lactose tolerance in populations that adopted dairy farming.

The Invention of the Dog: Domestication as Co-Evolution

The domestication of the dog represents a crucial innovation that preceded agriculture, transforming human fortunes. Genetic evidence suggests dogs diverged from Eurasian wolves between 20,000 and 40,000 years ago, indicating a single domestication event. Ridley posits that this likely began with wolves tentatively scavenging around human camps, with bolder individuals gaining food rewards, leading to a gradual selection for tameness.

Ridley introduces Dmitri Belyaev’s Siberian fox experiment from the 1960s, which demonstrated that selecting for docility also produced physical changes (curly tails, floppy ears, white patches) and behavioral shifts (breeding at younger ages), a “domestication syndrome.” This syndrome is linked to delayed migration of neural crest cells during development. Richard Wrangham extends this idea, hypothesizing that humans themselves are a self-domesticated species, selected for reduced reactive aggression to strangers, enabling denser, more collaborative societies. The shrinkage of human brains over the last 20,000 years, similar to brain shrinkage in domesticated animals, further supports this idea. Thus, the dog’s invention was not just a technological tool but a co-evolutionary process that changed both species.

The (Stone Age) Great Leap Forward: Collective Brains and Density

Before the “human revolution” (later Stone Age), hominid toolmaking, like the Acheulean hand-axe, remained remarkably unchanged for hundreds of thousands of years, suggesting innate, rather than innovative, behaviors. Ridley argues this began to shift around 160,000 years ago in Africa, with the emergence of new, complex toolkits and techniques like heat-treating stone. The “human revolution” in Europe around 45,000 years ago was actually “catch-up growth,” as Africa had seen these innovations much earlier.

Curtis Marean’s work at Pinnacle Point in South Africa reveals evidence of sophisticated human behavior (microliths, varied tools, fire use) dating back 160,000 years. Marean hypothesizes that coastal foraging, offering rich, predictable, and persistent food resources, allowed early humans to settle in denser, more sedentary populations. This increased population density fostered a “collective brain,” enabling specialization and a division of labor, incentivizing the invention of complex tools like the spear thrower or bow for defense. This suggests innovation thrives in wealthy, dense, and well-connected populations that can support specialized roles. The concept is reinforced by Tasmania’s “disinnovation,” where isolation and a small population led to the loss of technologies over millennia, underscoring the crucial link between population size, trade, and the development of novelty.

The Feast Made Possible by Fire: Cooking and Human Evolution

The invention of fire and cooking was a profound innovation that fundamentally changed human anatomy and social behavior. Ridley argues that humans are biologically adapted to cooked food, unlike other primates. Our smaller teeth, stomachs, and guts (which burn 10% less energy than other apes’ digestive systems) indicate a long history of consuming pre-digested food. Cooking gelatinizes starch and denatures proteins, dramatically increasing the digestible energy from food and freeing up energy for brain growth. The expansion of the human brain around two million years ago, with the emergence of Homo erectus, is linked to this shift.

While the exact “when and how” of fire control is debated, it likely involved observing natural fires and gradually learning to harness embers or intentionally spread fires. The consistent use of fire for cooking, even in the simplest hunter-gatherer societies, suggests it was a foundational practice. By tapping into the energy stored in wood through combustion, humans accessed a new energy source previously available only to decomposers. This was an “energy transition” as significant as the later adoption of fossil fuels, enabling the development of larger brains and more complex behaviors.

The Ultimate Innovation: Life Itself

Ridley posits that the beginning of life on Earth four billion years ago was the “first innovation,” a prime example of atoms and bytes rearranging into improbable, energy-harnessing forms. He highlights that all life forms share a fundamental, idiosyncratic mechanism for trapping and using energy: pumping protons across lipid membranes to create energy gradients. This arbitrary, yet universal, method of energy capture points to a single origin for all life.

Nick Lane’s research suggests that life may have begun in alkaline, warm-water vents on the ocean floor (like the Lost City). In these environments, natural proton gradients across thin mineral walls could have driven the synthesis of organic molecules, leading to molecular complexity. The arbitrary nature of the genetic code, shared across all life forms, further supports a single origin. Ridley concludes that human innovation is merely a continuation of this fundamental process of defying entropy by creating improbable, useful structures, a trajectory that began at the dawn of life itself.

Chapter 8: Innovation’s Essentials

This chapter distills recurring patterns from the history of innovation, arguing that successful innovation is inherently gradual, collaborative, often serendipitous, and thrives in environments of freedom and trial-and-error.

Innovation is Gradual

Ridley firmly asserts that innovation is almost always a gradual, not a sudden, phenomenon. He debunks the myth of “eureka moments,” arguing they are either rare or retrospective myths, preceded by extensive preparation and numerous false starts. For instance, the computer’s evolution can be traced through myriad small, incremental steps, from Jacquard looms to vacuum tubes, rather than a single invention. The Wright brothers’ first flight, while iconic, was merely a few seconds’ hop, the result of years of systematic experimentation and learning from failures, culminating in a series of gradual improvements in control and duration.

This gradual, evolutionary process applies to almost every innovation, including the double helix discovery and oral rehydration therapy. Ridley attributes the pervasive myth of sudden breakthroughs to human nature (desire for decisive agency, heroic narratives) and the intellectual property system (which incentivizes inventors to claim priority and magnify their contribution). He notes that nationalism further distorts history by conflating adoption with invention. Ultimately, the existence of patents, by creating artificial monopolies and encouraging litigation, often hinders broad, rapid innovation.

Innovation is Different from Invention

Ridley stresses the crucial distinction: innovation is the process of turning an invention into a practical, affordable, and widely adopted solution, which is often far more challenging and impactful than the initial invention. He uses the Haber-Bosch process as a prime example: Fritz Haber’s scientific discovery of nitrogen fixation was a great invention, but Carl Bosch’s years of relentless engineering and problem-solving were what made ammonia production scalable and affordable.

This principle, which Ridley terms the “toilet-paper principle” (after its humble but vital utility), means that affordability and simplification often matter more than beguiling complexity. The unexpected success of mobile telephony stemmed not from a single technological breakthrough, but from its dramatic fall in price. Ridley echoes Joseph Schumpeter’s observation that capitalism’s true achievement is not in providing more luxuries for the rich, but in making essential goods and services accessible to factory workers for decreasing amounts of effort.

Innovation is Often Serendipitous

Serendipity, defined as accidental discovery combined with sagacity, is a well-known and consistent attribute of innovation. Ridley provides numerous examples:

  • The founders of Yahoo! and Google didn’t set out to build search engines.
  • Roy Plunkett accidentally invented Teflon while trying to develop improved refrigerants.
  • Stephanie Kwolek serendipitously developed Kevlar while working on polymers.
  • Spencer Silver at 3M found a weak, temporary adhesive, which later became the Post-it note when a colleague, Art Fry, found a use for it.
  • Alec Jeffreys discovered genetic fingerprinting while trying to spot gene mutations for disease diagnosis, leading to its unexpected widespread use in forensics and paternity.

These stories illustrate that many crucial innovations arise from unintended observations and the ability to recognize their potential, often when pursuing a different goal. They highlight that innovation is not always a linear or planned process, but often a byproduct of open-ended exploration and a willingness to investigate anomalies.

Innovation is Recombinant

Every successful innovation is essentially a recombination of existing technologies and ideas. Ridley argues that just as biological evolution relies on recombination (like sexual reproduction) to create new variations, so too does human innovation. Technologies rarely appear from scratch; instead, they build upon and combine elements from previous inventions. The motor car, for example, did not invent wheels, springs, or steel; it combined them. Modern computers integrated vacuum tubes and storable program concepts from earlier machines.

This recombinant nature explains why innovation thrives where people and ideas can meet and exchange freely. Dense, trading cities (like Renaissance Italian city-states or Silicon Valley today) disproportionately foster innovation because they facilitate the “sex of ideas.” Ridley cites Andreas Wagner’s argument that sudden shifts of “whole chunks” of DNA (hybridization) are necessary for biological organisms to leap across disadvantageous “valleys” to new peaks of advantage, just as human innovation borrows “whole, working parts” from other technologies. This view challenges the primacy of individual “mutation-like” breakthroughs and highlights the cumulative, interconnected nature of progress.

Innovation Involves Trial and Error

Tolerance of error and persistent experimentation are critical to innovation. Ridley emphasizes that most inventors and innovators must “just keep trying,” with failures being an inherent part of the learning process. Thomas Edison’s famous quote, “I’ve not failed, I’ve just found 10,000 ways that won’t work,” exemplifies this. He and his team tested 6,000 materials for the light bulb filament. Similarly, Henry Booth helped George Stephenson improve the Rocket through trial and error, and Keith Tantlinger refined container fittings for Malcom McLean through endless experiments.

Ridley cites Dick Fosbury’s invention of the “Fosbury flop” in high jump as a trivial but telling example of a technique evolving through months of trial and error, not scientific design. Edward Wasserman’s research on the evolution of the violin’s design also suggests a process akin to natural selection. The principle of “fail fast and fail often” (common in Silicon Valley) allows innovators to learn rapidly. Ridley argues that America’s relatively benign attitude towards business failure, enabling entrepreneurs to recover and try again, is a significant advantage. This underscores that innovation is an empirical process where practical knowledge is gained through repeated attempts, often involving “useful” failures.

Innovation is a Team Sport

The myth of the solitary genius inventor is pervasive but misleading; innovation is always a collaborative enterprise. Ridley uses the “I, Pencil” analogy to illustrate that even simple objects require a vast network of specialized individuals, none of whom possess all the knowledge to create the product themselves. This distributed knowledge means that innovation inherently requires sharing and collaboration.

The Green Revolution in agriculture, for instance, was driven by Norman Borlaug’s dedication, but he built on ideas from a chain of breeders (Bayles, Vogel, Salmon, Inazuka) and collaborated with people like Bajwa and Swaminathan. Ridley highlights the flowering of scientific societies, clubs, and mechanics’ institutes during Britain’s Industrial Revolution as evidence of collective R&D. The principle extends to animal behavior, where larger groups of Australian magpies solve problems faster. Isolation, as seen with the Tasmanians, stifles innovation by limiting the “effective population size” and the exchange of ideas. This emphasizes that innovation is a network phenomenon where ideas “meet and mate” between, rather than within, individual brains, thriving where trade and exchange are frequent.

Innovation is Inexorable

The phenomenon of simultaneous invention—where multiple individuals independently stumble upon the same idea around the same time—suggests that many technological ideas are “ripe” for discovery, making innovation almost inexorable. Ridley lists numerous examples: six thermometers, five electric telegraphs, four decimal fractions, three hypodermic needles, and two natural selections. The most striking is the electric light bulb, independently invented by 21 people.

This implies that the individual inventor is strangely “dispensable” in the long run; if one person fails, another will likely achieve the same breakthrough soon after. The paradox is that while the individual might be dispensable to the ultimate invention, their success in winning the “race” makes them extraordinary in the short run. A second paradox is that despite this inevitability, innovation remains highly unpredictable in prospect, even to experts. Predictions often swing wildly between overly optimistic and overly pessimistic, demonstrating that anticipating human desires and the precise means of satisfying them is incredibly difficult.

Innovation’s Hype Cycle: Amara’s Law

Ridley introduces Roy Amara’s Law, stating that people tend to overestimate the impact of a new technology in the short run but underestimate it in the long run. This explains why technologies often experience initial “hype cycles” followed by periods of disappointment before their true, long-term impact becomes clear. Examples include the internet (dot-com bust followed by radical disruption a decade later), GPS (initial military failures before becoming indispensable), and the human genome project (initial grand promises followed by a backlash, and then a gradual realization of its medical impact).

Ridley applies this to artificial intelligence (AI), suggesting that while it has had past “AI winters,” it might now be on the cusp of fulfilling its promise. Conversely, he suggests blockchain is currently in the early hype stage, likely to face disappointment in the short term before its long-term potential. He also predicts that self-driving cars are being significantly overestimated in the short term due to the immense practical, regulatory, and infrastructural challenges, and will likely take longer to become widespread than commonly believed. This pattern reflects the lengthy process of turning inventions into practical, reliable, and affordable innovations.

Innovation Prefers Fragmented Governance

Ridley argues that empires and centralized governments are generally bad at innovation, despite their wealth and educated elites. Instead, fragmented governance, particularly in city-states, tends to foster inventiveness. Examples include:

  • The Italian Renaissance city-states (Genoa, Florence, Venice) driving innovation through merchant-led, competitive environments.
  • Ancient Greece’s fragmented city-states.
  • Europe’s rapid adoption of printing in the 1400s, facilitated by its political fragmentation, which allowed innovators like Johann Gutenberg and Martin Luther to find supportive regimes. In contrast, the Ottoman and Mughal empires banned printing for centuries, due to alliances between calligraphers and priests who successfully lobbied imperial authorities.
  • China’s bursts of innovation coincided with decentralized “warring states” periods, while strong empires like the Ming stifled it.

The United States, with its federal structure allowing state-level experimentation, is an apparent exception that proves the rule, though Ridley notes growing federal strength may be hindering recent dynamism. He cites Geoffrey West’s finding that cities scale with superlinear rates of innovation and wages (more patents, universities per capita) compared to their sublinear infrastructure growth, explaining why cities thrive while large companies often falter and die. Ultimately, fragmentation fosters innovation by allowing competition among jurisdictions and preventing monopolies on ideas, whereas centralized power tends to ossify and resist novelty.
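West’s finding can be sketched numerically. The sketch below is illustrative only: the scaling exponents (roughly 1.15 for socioeconomic outputs like patents and wages, roughly 0.85 for infrastructure) are the figures commonly cited from West’s urban-scaling work, not numbers taken from Ridley’s text.

```python
# Illustrative sketch of urban scaling laws: a quantity Y scales with
# city population N as Y = Y0 * N**beta. beta > 1 (superlinear) for
# socioeconomic outputs, beta < 1 (sublinear) for infrastructure.

def scaled_quantity(population: float, y0: float, beta: float) -> float:
    """Return Y = Y0 * N**beta for a city of the given population."""
    return y0 * population ** beta

# Compare a city of 1 million with one of 2 million.
small, large = 1_000_000, 2_000_000

# Superlinear (~1.15): doubling population MORE than doubles innovation.
patents_ratio = scaled_quantity(large, 1.0, 1.15) / scaled_quantity(small, 1.0, 1.15)

# Sublinear (~0.85): doubling population LESS than doubles infrastructure.
roads_ratio = scaled_quantity(large, 1.0, 0.85) / scaled_quantity(small, 1.0, 0.85)

print(f"patents ratio on doubling: {patents_ratio:.2f}")  # > 2
print(f"roads ratio on doubling:   {roads_ratio:.2f}")    # < 2
```

The asymmetry is Ridley’s point: as cities grow, innovation per capita rises while infrastructure cost per capita falls, which is why dense cities keep compounding their advantage.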

Innovation Increasingly Means Using Fewer Resources Rather Than More

Contrary to fears about indefinite growth hitting resource limits, Ridley argues that economic growth is increasingly “dematerializing”, meaning it involves doing more with less. This is driven by innovation, which enables greater efficiency in resource use. He cites Jesse Ausubel’s discovery that the American economy uses significantly less steel, aluminum, and copper than at its peaks, despite a larger population and higher output. Farms produce more food with less fertilizer and water, and energy systems generate fewer emissions per kilowatt-hour.

In many sectors this overturns the “Jevons paradox”, the observation that efficiency gains historically led to greater overall energy consumption. For example, LEDs use less than 25% of the electricity of incandescent bulbs for the same light, meaning you would have to run them more than four times as long to consume more power overall, an unlikely scenario. Ridley, drawing on Andrew McAfee’s “More from Less,” asserts that this dematerialization is why pessimistic predictions of resource depletion have been spectacularly wrong. It signifies that growth is indefinitely sustainable: by continuously lowering the resources needed per unit of output, economic enrichment need not come at the expense of the planet.
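The break-even arithmetic behind the LED example is simple enough to sanity-check. The function below is a generic sketch (the 25% figure is the one quoted in the text; actual LED-versus-incandescent ratios vary by product):

```python
# Jevons-paradox break-even check: if a new device uses a fraction f of
# the old device's power, total consumption only rises once usage grows
# by more than a factor of 1/f (since f * m = 1 at break-even).

def break_even_usage_multiple(power_fraction: float) -> float:
    """Usage multiplier m at which the efficient device consumes as much
    energy as the old one did: solve f * m = 1 for m."""
    if not 0 < power_fraction <= 1:
        raise ValueError("power_fraction must be in (0, 1]")
    return 1.0 / power_fraction

m = break_even_usage_multiple(0.25)
print(f"At 25% of the power, break-even is {m:.0f}x the usage hours")  # 4x
```

So for the rebound effect to erase the savings, lighting hours would have to more than quadruple, which is the “unlikely scenario” Ridley has in mind.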

Chapter 9: The Economics of Innovation

This chapter delves into the economic theories surrounding innovation, highlighting how it has often been a “curious hole” in conventional models and asserting its fundamental role as a bottom-up, market-driven phenomenon.

The Puzzle of Increasing Returns

Ridley identifies a fundamental contradiction at the heart of economic theory, originating with Adam Smith: the “invisible hand” implies diminishing returns (markets reaching equilibrium), while the “division of labor” implies increasing returns (specialization leading to ever-decreasing costs). Subsequent economists, including Ricardo, Mill, and Keynes, largely focused on diminishing returns, assuming innovation was an external factor that would eventually fade.

Joseph Schumpeter was a notable exception, arguing in 1942 that innovation, driven by “creative destruction,” offered potentially infinite increasing returns, a view that proved prescient despite being unfashionable. Robert Solow later highlighted innovation as the unexplained “residual” (85%) of economic growth. Early theories by Nelson and Arrow posited that government funding of research was necessary because the private sector wouldn’t generate knowledge due to its “non-rival” nature (easy to copy). However, Ridley counters that this ignores significant privately funded research and the fact that innovation has thrived historically without large government budgets. The eventual Nobel laureate Paul Romer introduced the concept that new knowledge is “non-rival” (shareable) but also “partially excludable” (temporarily profitable), allowing innovators to make money before ideas are widely copied, thus making innovation an “endogenous” factor of growth.
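Solow’s “residual” has a compact standard statement. The decomposition below is the textbook growth-accounting form (not taken from Ridley’s text), assuming a Cobb-Douglas production function \(Y = A K^{\alpha} L^{1-\alpha}\):

```latex
% Growth accounting: taking logs of Y = A K^{alpha} L^{1-alpha} and
% differentiating with respect to time decomposes output growth into
% contributions from capital, labour, and the residual A.
\[
\frac{\dot{Y}}{Y}
  = \alpha \, \frac{\dot{K}}{K}
  + (1-\alpha) \, \frac{\dot{L}}{L}
  + \frac{\dot{A}}{A}
\]
% The term \dot{A}/A (total factor productivity growth) is the part of
% growth that cannot be attributed to more capital or more labour.
```

Ridley’s point is that this unexplained term, roughly 85% of growth in Solow’s data, is precisely the contribution of innovation, which is why Romer’s later move to make it “endogenous” mattered so much.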

Innovation is a Bottom-Up Phenomenon

Ridley strongly refutes the “creationist” view that innovation is primarily a product of intelligent design by government, as argued by Mariana Mazzucato in “The Entrepreneurial State.” He contends that innovation has historically been a bottom-up, decentralized process:

  • The Industrial Revolution in Britain, with its railways, steel, and textiles, owed little to government direction; it was a private-sector phenomenon.
  • America’s early 20th-century innovation occurred without significant public R&D until 1940.
  • Government-funded projects, like Samuel Langley’s failed plane or Britain’s R101 airship, often highlight inefficiency compared to private ventures.
  • While government has funded R&D since WWII, Ridley argues this is often “spillover” (e.g., DARPA funding leading to the internet), not intentional direction, and that government spending can “crowd out” private research.

He cites an OECD study finding that privately funded R&D correlated with economic growth, but publicly funded R&D did not. The success of Japanese innovation before 1991, with less than 20% government R&D funding, further supports this. Ridley concludes that while governments can stimulate innovation, they are not its main actor. The fact that government itself is typically un-innovative (e.g., Parliament unchanged for centuries) further suggests that innovation generally flows from market dynamics rather than top-down mandates.

Innovation is the Mother of Science as Often as It Is the Daughter

Contrary to the common “linear model” (science leads to technology leads to innovation), Ridley argues that innovation (technology/practice) is often the parent of science (understanding). Examples include:

  • Steam engines leading to the understanding of thermodynamics.
  • Powered flight preceding most aerodynamics.
  • Animal and plant breeding preceding genetics.
  • Vaccination being widely practiced for centuries before Pasteur explained germ theory.
  • The understanding of antibiotics coming long after their practical use.

He notes that Adam Smith observed innovation primarily deriving from “common workmen” rather than academic philosophers. While modern research shows some connection between academic science and patents/drugs, Ridley asserts it’s often a reciprocal relationship, not a linear one. The discovery of DNA structure, for instance, owed much to X-ray crystallography, a technique developed partly for the textile industry. Similarly, CRISPR gene editing had roots in solving practical problems in the yogurt industry. Ridley emphasizes that the “linear model,” though popular among politicians to justify science funding, misreads history and devalues science as an end in itself.

Innovation Cannot Be Forced Upon Unwilling Consumers

Innovation, Ridley argues, is not necessarily a good thing; it can be harmful (poison gas) or useless (manned space travel’s limited economic benefit). To succeed without subsidy, an innovation must prove useful to individuals and save time, energy, or money. Technologies that are too costly or offer no clear extra benefit will not thrive, no matter how ingenious.

Ridley asserts that innovation cannot be imposed upon unwilling consumers. The ultimate test for an innovation is its adoption by free consumers who are willing to reward it. This implies a market-driven process where products must genuinely satisfy desires and demonstrate practical value, rather than being forced by central planning or government mandate. This concept underpins his argument for the importance of consumer freedom in driving the success or failure of new products.

Innovation Increases Interdependence

A central theme of human history, according to Ridley, is the increasing specialization of production coupled with the increasing diversification of consumption. Innovation has driven this shift, moving humans from precarious self-sufficiency to safer mutual interdependence. Individuals specialize in a “job” to produce a narrow range of goods/services, using their earnings to consume a vast array produced by others. This contrasts with animals, which consume only what they produce.

Innovation enables this, for example, by making it possible to earn enough in a fraction of a second to afford electricity for an hour, a task that would have taken a whole day in the past. Ridley argues that the internet, far from making people antisocial or self-sufficient, has deepened this interdependence by bringing services (like travel agents or secretarial tools) to everyone and enabling massive social engagement. He points out that today, economic inequality is more about luxuries than necessities, as innovation has made basic goods and services accessible to almost everyone, raising living standards for all by increasing productivity and purchasing power.

Innovation Does Not Create Unemployment

The fear that innovation destroys jobs is a long-standing concern, dating back to the Luddites and Captain Swing in the 19th century. Ridley argues that, historically, this fear has consistently proved wrong. While jobs are displaced (e.g., farm workers moving to manufacturing, manufacturing workers to services), innovation concurrently creates new jobs and new industries, and the overall employment rate often rises. He cites the example of bank tellers: despite ATMs, there are more tellers today, with more interesting jobs.

He acknowledges current fears about AI and automation eliminating jobs, with some studies predicting significant job displacement (e.g., Frey and Osborne’s 47% figure). However, he points to OECD findings of much lower risk (9%) and notes that such predictions have always been wrong. Innovation, by increasing productivity, makes goods and services cheaper, leading to increased demand and new types of work. It also leads to more leisure time, which is equitably distributed across society (e.g., shorter work weeks, longer retirement). Ridley concludes that work is not an end in itself, and that fears of mass unemployment from automation are largely unfounded, as human desires and new tasks will always emerge.

Big Companies Are Bad at Innovation

Ridley argues that innovation often comes from outsiders and startups, not large, established companies. Big firms are frequently “blindsided” by more agile disruptors, exemplified by Kodak failing to capitalize on digital photography despite inventing it, due to its vested interest in film. This is because large companies tend to be bureaucratic, risk-averse, and complacent, failing to adapt to customer interests or new technologies.

The one thing that spurs innovation in big companies is competition. Supermarkets, for instance, constantly innovate due to fierce rivalry. Ridley highlights Procter & Gamble’s shift to “open innovation,” seeking half its innovations from outside the firm, as a response to stagnating internal R&D. The ultimate form of open innovation is open-source software, like Linux, which has become dominant in supercomputing and mobile (Android), demonstrating that free sharing of ideas can encourage, not deter, innovation. Ridley also introduces the concept of “free innovation” by consumers themselves, who often develop and share product modifications for their own use, unhindered by profit motives or corporate caution, as seen with Nightscout for diabetics.

Setting Innovation Free: Addressing Obstacles

Ridley concludes this section by outlining the obstacles that stifle innovation, arguing that while everyone generally favors innovation, they often resist it in specific cases due to:

  • Rent-seeking opportunities and regulatory capture: Powerful incumbents often lobby governments to create rules (patents, occupational licenses, subsidies) that protect their monopolies and hinder new entrants.
  • Bureaucracy and caution: The rise of “compliance officers” and stringent regulations disproportionately impacts small businesses, deterring new entrants.
  • Geographic factors: High housing costs in innovative areas (like Silicon Valley) can drive talent away.
  • Public psychology: Human beings are inherently cautious of novelty, often leading to bans or severe restrictions based on exaggerated fears.

He cites Paris’s taxi caps (which stifled competition before Uber) and land-use planning as examples of regulations that hinder innovation. Ultimately, Ridley asserts that Western economies have accumulated too many barriers to innovation, often justified by the “precautionary principle” that stifles positive change. He argues that innovation thrives when it’s “permissionless,” allowing experimentation unless specifically prohibited, as seen with the internet’s early growth in the US. The chapter sets up the final argument that overcoming these entrenched resistances is crucial for future prosperity.

Chapter 10: Fakes, Frauds, Fads and Failures

This chapter explores the darker side of innovation, detailing how its promise can attract frauds, fakers, and faddists, and how failure is an inevitable, and often necessary, part of the innovation process itself.

Fake Bomb Detectors: The Danger of Gullibility

Ridley illustrates the dangers of fake innovation with the story of Wade Quattlebaum’s Quadro Tracker Positive Molecular Locator, a device that supposedly detected golf balls, then drugs and explosives, by using a free-swinging antenna like a dowsing rod. Despite its implausibility, some people were convinced due to self-deception and a desire to believe in simple solutions.

The scam escalated when Jim McCormick in Britain developed his “ADE 651” devices, selling thousands to governments, notably Iraq, which used them at checkpoints during sectarian violence. These non-functional devices gave false reassurance, likely contributing to many deaths. McCormick, who became wealthy from the fraud, still protested that his devices worked. This disturbing tale highlights how a thin veneer of “innovation” and a high price can exploit gullibility when people desperately want a problem solved, leading to devastating real-world consequences.

Phantom Games Consoles: The Pitfalls of Vapourware

The concept of “vapourware” is introduced through the story of Infinium Labs (later Phantom Entertainment) and its promise of a “revolutionary new gaming platform” for online, on-demand video games. Founded in 2002, the company made repeated announcements about its product launch, delaying it multiple times from 2003 to 2006, before finally abandoning it. The company’s executives were later accused of illegally boosting share prices with false announcements.

Ridley notes that this practice of announcing products before they are ready, while sometimes a benign “fake it till you make it” strategy (as with Thomas Edison), can also be a deliberate deception to deter competitors or raise funds. This highlights the fine line between entrepreneurial ambition and outright fraud in the innovation landscape, especially when technology is complex and difficult for outsiders to verify.

The Theranos Debacle: Hubris, Deception, and Failed Promises

The Theranos scandal is presented as a spectacular case of failed innovation driven by hubris, deception, and a misplaced faith in “fake it till you make it.” Elizabeth Holmes, a Stanford dropout, founded Theranos in 2003 with the ambitious goal of providing pain-free, low-cost blood tests from a tiny drop of blood using microneedles and a silicon chip. Her vision was to be the “iPod of healthcare,” inspired by Steve Jobs.

However, the core technology (the patch, then later a “nanotainer” cartridge and miniLab robot) never worked reliably, a fact concealed from investors, partners, and even many employees. A culture of fear and lawsuits against those who raised concerns (leading to the suicide of chief scientist Ian Gibbons) characterized the company. Despite the technological failures, Theranos became a darling of Silicon Valley, attracting billions in investment from prominent figures and deals with Walgreens and Safeway, largely on the strength of Holmes’s charisma and a board stacked with influential but technologically ignorant political figures (like George Shultz and Henry Kissinger).

Ridley highlights how investors and partners engaged in circular reasoning, assuming others had validated the technology. The company made false claims about military use and Johns Hopkins’s endorsement. This was a classic example of “noble-cause corruption” where a good cause (accessible healthcare) was used to justify fraudulent means. The deception only unraveled due to the persistence of investigative reporter John Carreyrou and whistleblowers. The Theranos debacle serves as a cautionary tale against unchecked hype and a lack of rigorous, independent verification in the pursuit of innovation, showing how a failed innovation can leave a “scorched earth” behind.

Failure Through Diminishing Returns to Innovation: Mobile Phones

Not all innovation failures are fraudulent; many stem from honest attempts that simply hit diminishing returns. The mobile phone market exemplifies this. After experiencing rapid innovation from the 1990s to the 2010s (shrinking handsets, new features, smartphones), innovation in this sector began to stall around 2017-2019. New features seemed marginally useful, and prices exorbitant.

Companies like Nokia, Motorola, and Blackberry illustrate how incumbents can fall “painfully to earth” despite massive R&D spending. Nokia, once dominant, failed to transition from hardware to software, sticking to cautious, slow assessment processes while companies like Apple rapidly innovated. Ridley notes that “innovation, more often than not, eats its own offspring.” The diminishing returns seen in features like foldable phones (e.g., Huawei Mate X, Samsung Galaxy Fold) suggest a limit to what consumers find useful from a pocket device. This illustrates that innovation cannot force new ideas on people unless there’s a genuine desire and perceived value, and that past success does not guarantee future relevance in a dynamic market.

A Future Failure: Hyperloop

Ridley uses Elon Musk’s Hyperloop concept as a potential “future failure,” arguing it embodies the common pitfalls of innovation hype. Musk proposed a vacuum-sealed tube system for high-speed travel (760 mph) on magnetic levitation, promising it would be safer, faster, cheaper, and more sustainable than high-speed rail. Ridley immediately points out that:

  • The concept is not new, with pneumatic tube transport ideas dating back to the 1800s and similar vacuum-train proposals in the 1900s.
  • Engineering challenges are immense: building hundreds of miles of vacuum-sealed tubes, maintaining the vacuum, flexible thermal expansion joints, and handling high pressures are costly and difficult.
  • Land acquisition for straight routes will be expensive.
  • Energy requirements are significant: the vacuum pumps need continuous power, and braking consumes extra energy because there is no air resistance to help slow the pods.
  • Capacity is limited: moving thousands of people per hour would make the logistics incredibly complex, akin to running a busy airport.

While some problems might be solved, Ridley argues there’s no guarantee it will be cheaper or more reliable than existing transport. He suggests the hype around Hyperloop stems from a belief that “innovation can solve almost any problem,” even when core engineering and economic realities remain unaddressed. This illustrates the tendency to overestimate the short-term impact of new technologies, particularly those without a clear path to affordability and practicality compared to existing alternatives.

Failure as a Necessary Ingredient of Success: Amazon and Google

Ridley emphasizes that failure is a crucial, “necessary ingredient of success” in innovation, especially for companies like Amazon and Google. He contrasts the “wobbly” Millennium Bridge in London, which was temporarily closed due to an unforeseen design flaw but later successfully stabilized, with the need for systemic failure tolerance in business.

Jeff Bezos, Amazon’s founder, famously stated that Amazon’s success is a function of the number of experiments they conduct and their willingness to be wrong. He deliberately fostered a culture of “fail fast and fail often,” even when it led to massive losses (e.g., writing off $39M in unsold toys, numerous failed dot-com acquisitions). Bezos’s management style (small “two-pizza” teams, a “reverse-veto” policy where one manager can pursue an idea even if others object) was designed to institutionalize experimentation and allow painless failure. This Darwinian process led to successes like Amazon Web Services (AWS), a massive cloud computing business that even rivals Google and Microsoft were slow to spot.

Google’s “X” division, dedicated to high-risk “moonshot” innovation, similarly embraces failure. While projects like Google Glass (a public and expensive misjudgment) fail, Astro Teller, head of X, celebrates these failures as crucial learning opportunities. This high appetite for failure, particularly prevalent in Silicon Valley (partly due to dual-class share structures allowing founders to take risks), is essential for generating truly disruptive innovation that might otherwise be stifled by corporate caution. The success of companies like Naspers (a conservative South African publisher that pivoted into tech investing and made a colossal bet on Tencent) further illustrates how embracing risk and learning from failures can lead to extraordinary rewards.

Chapter 11: Resistance to Innovation

This chapter explores the universal phenomenon of resistance to innovation, demonstrating how new ideas are consistently met with opposition from various sources, despite their potential benefits.

When Novelty is Subversive: The Case of Coffee

The history of coffee illustrates how innovation, even a simple beverage, can be deemed subversive and face fierce resistance. As coffee spread to Arabia, Turkey, and Europe from the 1500s to 1600s, it was repeatedly banned in Mecca and Constantinople. The primary reason for this opposition was that coffee houses were hubs of gossip and potential sedition, where people discussed political matters and criticized rulers.

Beyond political paranoia, vested interests also played a role. Wine makers in France and beer brewers in Germany opposed coffee as a competitor. They allied with the medical profession, commissioning pseudoscientific pamphlets that claimed coffee caused impotence and exhaustion. Sweden, in particular, attempted to ban coffee several times, well into the nineteenth century, even conducting an experiment (with convicted murderers drinking coffee vs. tea) that ironically showed coffee was harmless. This episode highlights how opposition to novelty often combines appeals to safety, self-interest from incumbents, and paranoia among the powerful, mirroring modern debates about new technologies. Ridley also briefly mentions the margarine wars, where the dairy industry launched smear campaigns against the butter substitute, echoing modern anti-biotech campaigns.

When Innovation is Demonized and Delayed: The Case of Biotechnology

The campaign against agricultural biotechnology (GMOs) in Europe serves as a modern example of demonization and delay tactics successfully stifling innovation, echoing the coffee and margarine wars but with more lasting impact. Initially, transgenic crops in the US faced little opposition, but in Europe, everything changed suddenly after 1996, when concerns about “mad cow disease” (BSE) coincided with the approval of GM soybeans, eroding public trust in government food safety assurances.

Greenpeace and Friends of the Earth exploited this public unease, using market research to launch highly effective campaigns, demonizing GM foods as “Frankenfood.” This led to a moratorium on new GM crops in the EU, which evolved into a burdensome regulatory approval system amounting to a de facto ban. The precautionary principle, as adopted by the EU, became a key tool, holding new technologies to an impossibly high standard while ignoring the risks of existing practices (e.g., allowing copper-based pesticides in organic farming despite their toxicity). This led to a “vicious circle”: activist demands for regulation made new crop development so expensive that only large companies could pursue it, which then fueled further anti-corporate activism.

Greenpeace also actively campaigned against humanitarian applications like Golden Rice (designed to combat Vitamin A deficiency), making shifting, unscientific claims about its safety and efficacy. This “shocking campaign” led 134 Nobel laureates to call for Greenpeace to cease its opposition, but to no avail. The result was Europe effectively abandoning GM crop research, driving innovation to other regions and creating a significant human cost in terms of preventable deaths and malnutrition.

When Scares Ignore Science: The Case of Weedkiller (Glyphosate)

The controversy surrounding the herbicide glyphosate (Roundup) demonstrates how scares can ignore scientific consensus to deter innovation and enable litigation. Glyphosate, invented in 1970, is a cheap, ubiquitous, and generally safer weedkiller than its predecessors, breaking down rapidly and primarily affecting plants. However, in 2015, the International Agency for Research on Cancer (IARC), a WHO body, classified glyphosate as “probably carcinogenic to humans” at very high doses, placing it in the same category as sausages and sawdust. This contradicted the findings of other major food safety authorities worldwide, which found no risk at normal doses.

Ridley points out that IARC’s conclusion was based on a biased review of evidence, with negative findings being deleted or altered, and that its lead advisor on the matter had financial ties to law firms suing Monsanto. This aligns with a common activist tactic: “manipulate public perception, create fear or outrage… find a corporate scapegoat and litigate the hell out of them.” The resulting litigation bonanza (e.g., against Monsanto in the US) discourages further research and development in crop protection products, even if they offer safer alternatives. This illustrates how ideological opposition can undermine scientific evidence to achieve political or financial goals, ultimately hindering beneficial innovation.

When Government Prevents Innovation: The Case of Mobile Telephony

Ridley argues that government, often in league with vested interests, can actively prevent innovation, even if it delays progress for decades. The history of mobile telephony in the US is a prime example of bureaucratic delay and regulatory capture. In 1945, the head of the US Federal Communications Commission (FCC) optimistically predicted millions of “hand-held talkies” using a “cellular” concept. However, in 1947, the FCC rejected AT&T’s application for cellular service, prioritizing television spectrum and effectively blocking mobile wireless for over a generation. Broadcasters lobbied to protect their unused spectrum.

Existing “radio common carriers” (RCCs) and Motorola (with its monopoly on large handsets) also fiercely opposed cellular, fearing competition. AT&T, comfortable with its landline monopoly, had no incentive to disrupt itself. As late as 1980, AT&T forecast only 900,000 mobile handsets by 2000 (actual: 109 million). It took until 1982 for the FCC to finally accept cellular license applications, and the first service launched in 1984, 39 years after it was first conceived. Europe, initially slow due to nationalized industries, overtook the US by setting the GSM 2G standard, but its protectionism against Qualcomm’s CDMA (a data-focused standard) ultimately led to Europe falling behind in 3G. This story demonstrates how government regulation, influenced by incumbent lobbies and bureaucratic inertia, can stifle innovation for decades, with significant opportunity costs for society. Ridley also briefly notes similar regulatory hurdles for drones in the US, allowing China to dominate the industry due to less restrictive rules.

When the Law Stifles Innovation: The Case of Intellectual Property

Ridley argues that while intellectual property (IP)—patents and copyright—is justified as encouraging innovation, the evidence shows it often hinders rather than encourages progress. The scope and strength of copyright have been steadily extended (e.g., life of author plus 70 years), with little evidence of a corresponding explosion in creative output. He notes that artists like Shakespeare thrived without copyright. In the music industry, “piracy” (mass file sharing) led to a steep fall in revenues but a doubling in the number of new albums released, suggesting creators are driven by influence and fame as much as money.

Patents, intended to grant temporary monopolies for disclosure, are flawed because ideas are non-rival (can be shared without depletion). Ridley cites Alex Tabarrok’s argument that the US patent system, particularly since the 1980s, has become a “patent thicket” that discourages innovation. The number of patents issued has quintupled, but economic growth has slowed. Industries like semiconductors amass “war chests” of patents to litigate. Historically, inventors like Watt, Morse, and Marconi wasted years in court defending patents, which often blocked further innovation. He notes that some inventive industries (e.g., 18th-century English clockmakers, and later the Netherlands and Switzerland) thrived without strong patent systems.

Ridley highlights that most organizational innovations are unpatented but widely copied. He argues that patents, except possibly in pharmaceuticals (due to high R&D and testing costs), raise costs, slow diffusion, and divert resources into litigation (e.g., “patent trolls”). He advocates for weaker or multi-tiered patent systems and suggests that IP law, by creating artificial monopolies, is “a significant drag on innovation and growth.”

When Big Firms Stifle Innovation: The Case of Bagless Vacuum Cleaners

Ridley explains that big firms often stifle innovation due to their internal bureaucracies, vested interests, and complacency. He links this to the broader issue of “rent-seeking” in modern Western economies, where talented individuals are diverted from productive innovation into lucrative professions that profit from lobbying and exploiting regulatory protections. Examples include:

  • The growth of occupational licensing (e.g., interior designers, manicurists), which acts as a barrier to entry, protecting incumbents and hindering entrepreneurial disruption.
  • Land-use planning, which drives up housing costs in innovative cities, discouraging migration to areas of high opportunity.
  • Regulatory capture, where industry lobbies shape rules to favor existing technologies, exemplified by the European Union’s “Ecodesign and Energy Labelling regulations” for vacuum cleaners.

Ridley details how James Dyson’s bagless vacuum cleaner, despite its superior efficiency, faced a five-year legal battle against EU regulations designed by major German manufacturers. The EU rules stipulated testing vacuum cleaners empty, with no dust, which favored bagged models that lose suction as they fill. Dyson eventually won in the European Court of Justice, proving the rule’s discriminatory nature, but the delay allowed Chinese competitors to catch up. This case highlights how bureaucratic inertia and crony capitalism in the EU actively hinder innovative processes, leading to slower economic growth and reinforcing incumbent power, rather than fostering competition and dynamism.

When Investors Divert Innovation: The Case of Permissionless Bits

Peter Thiel’s observation that “bits were unregulated and atoms were regulated” in the mid-2010s highlights how investor capital is diverted by regulatory environments. Software (bits) experienced “permissionless innovation,” where low startup costs and minimal regulation allowed rapid experimentation and growth (e.g., the early internet, enabled by the US’s remarkably libertarian “Framework for Global Electronic Commerce”). In contrast, physical technologies (atoms), particularly in medical devices or drugs, faced prohibitive regulatory costs (billions to get a new drug approved), stifling startups.

This “cost disease” (William Baumol’s concept) means that innovation in one sector (manufacturing) can drive up costs in another (services) if the latter experiences less innovation. When innovation is stifled in physical goods, investor capital flows to sectors with fewer regulatory hurdles, like software. Ridley argues that while regulation can prevent harm (e.g., Thalidomide), excessive or slow regulation deters experimentation and can even lead to more dangerous outcomes by delaying better solutions. He concludes that innovation is universally desired but often resisted in specific cases due to incumbent interests, psychological conservatism, and burdensome legal frameworks (like patents) that actively hinder progress.
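Baumol’s logic can be made concrete with a toy two-sector model (our illustration, not Ridley’s or Baumol’s own numbers): if manufacturing productivity grows while service productivity stands still, and wages in both sectors track the productive sector, manufactured goods stay cheap while services become relatively ever more expensive.

```python
# Toy sketch of Baumol's cost disease. Assumptions (hypothetical, for
# illustration only): manufacturing productivity grows 3%/year, service
# productivity is flat, and labour mobility forces wages in both sectors
# to rise with manufacturing productivity.

def unit_cost(wage: float, productivity: float) -> float:
    """Cost of one unit of output = wage per worker / units per worker."""
    return wage / productivity

wage = 1.0
manu_prod = 1.0   # units per worker, manufacturing
serv_prod = 1.0   # units per worker, services (no innovation here)

history = []
for year in range(30):
    history.append((year, unit_cost(wage, manu_prod), unit_cost(wage, serv_prod)))
    manu_prod *= 1.03   # manufacturing innovates
    wage *= 1.03        # economy-wide wages follow the productive sector

start_manu, start_serv = history[0][1], history[0][2]
end_manu, end_serv = history[-1][1], history[-1][2]
# Manufacturing unit cost stays flat (wages and productivity rise together),
# while the cost of services more than doubles, driven by wages alone.
```

The point of the sketch is that the service sector’s costs rise without any change inside that sector: the “disease” is imported from the innovating sector via wages, which is why stifling innovation in physical goods shifts capital toward sectors like software.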

Chapter 12: An Innovation Famine

This concluding chapter summarizes Ridley’s core arguments about how innovation works, reflects on its current state, and offers a vision for the future, while warning against complacency.

How Innovation Works: Freedom as the Secret Sauce

Ridley reiterates that the “main ingredient in the secret sauce” for innovation is freedom:

  • Freedom to exchange, experiment, imagine, invest, and fail.
  • Freedom from expropriation or restriction by authorities.
  • Freedom for consumers to reward innovations they like.

He argues that innovation is the “drive chain” that links freedom to prosperity. Because human desires and the means to satisfy them are unpredictable, innovation cannot be easily planned; it must be a free, creative response. This explains why it is:

  • Organic and bottom-up, responding to authentic desires.
  • Collective and collaborative, as no one mind knows enough.
  • Inevitable in retrospect, as the link between desire and satisfaction becomes clear.

Ultimately, innovation thrives in free societies where the default is permission (what is not prohibited is allowed), contrasting with modern trends where governments increasingly dictate what can and cannot be done.

A Bright Future: The Potential of Innovation

Ridley, despite his warnings, remains an optimist about innovation’s potential. He speculates on future breakthroughs by 2050 that could dramatically improve human life and environmental well-being:

  • Affordable, humane care for the elderly driven by AI and telecare.
  • Compression of morbidity through senolytic drugs, robotic surgery, and gene-edited cancer treatments.
  • Stemming the rise of allergies and autoimmune diseases by addressing gut microflora imbalances.
  • Cleaner and safer transport through AI, ride-sharing, and efficient vehicles.
  • Transformation of government and finance through cryptocurrencies and blockchain, leading to less inflation, reduced crime, and fairer taxation.
  • Revolution in wildlife conservation using gene drive technology to control invasive species and gene editing to restore extinct ones.
  • Increased agricultural productivity (50% higher yields) through gene editing and photosynthetic enhancements, leading to more land for nature.
  • Replenished ocean ecosystems and repaired rainforests.
  • Sufficient, low-carbon energy from modular nuclear power (including fusion), carbon capture, and reforestation.

All these possibilities, Ridley asserts, are “easily within the reach of innovation,” contingent on whether humanity chooses to allow them to unfold.

Not All Innovation is Speeding Up: A Nuanced View

Ridley challenges the cliché that “innovation is speeding up every year.” He notes that while communication and computing have been utterly transformed in his lifetime (e.g., smartphones), average travel speed has seen little or no improvement. Planes and cars travel at much the same speeds as in the 1950s, with congestion often increasing travel times. The record for the fastest manned aircraft (the X-15, set in 1967) remains unbroken, and Concorde is history.

This contrasts with the 1950s/60s, when science fiction envisioned rapid transport advances but overlooked computing. Ridley suspects that the next half-century might see a slowdown in information technology advances and an acceleration in biotechnology, defying current expectations. This nuanced view emphasizes that innovation is not uniform across all sectors and that some areas may reach diminishing returns while others are just beginning their exponential growth phase.

The Innovation Famine: Stagnation in the West

Ridley warns of an “innovation famine” in the Western world, particularly since 2009, leading to slow economic growth. He argues that the “perennial gale of creative destruction” has been replaced by “gentle breezes of rent-seeking.” Symptoms include:

  • Corporate managerialism stifling enterprise, with big companies becoming more bureaucratic and risk-averse.
  • Huge corporate cash piles that are not invested in innovation.
  • Aging corporate assets and a preference for safety over bold ventures.
  • Diffused ownership (pension funds) leading to less “skin in the game” for entrepreneurs, who become rentiers.
  • Increased spending on litigation (e.g., patent enforcement) rather than new products.
  • Declining dynamism in the US economy, with fewer new business formations and longer incumbency of top firms.
  • Even worse in Europe, where “creative destruction has almost ground to a halt,” and no digital giants have emerged to rival US or Chinese firms.

Ridley attributes this stagnation to a “near obsession with precautions,” bureaucratic strangulation, and rules that favor incumbent firms, contrasting with the spirit of dynamism seen in earlier eras. This “troubling reality” suggests that society should fear a lack of innovation more than its abundance.

China’s Innovation Engine: A New Frontier

Ridley acknowledges that China’s innovation engine has “fired up,” potentially leapfrogging the West. He notes that China has moved beyond merely copying, now excelling in:

  • Mobile-first internet use (cashless, app-driven payments).
  • Integrated digital services (e.g., WeChat combining multiple functions).
  • Rapidly falling mobile data costs.
  • Aggressive investment in AI, gene editing, nuclear, and solar energy.
  • Breathtaking infrastructure development (motorways, high-speed rail, data networks).

This furious pace is attributed to the “9-9-6 week” work culture (9 a.m. to 9 p.m., six days a week) among Chinese entrepreneurs, similar to the work ethic seen in early industrializing nations. He suggests that while Silicon Valley might sputter, China is set to dominate innovation in the coming decades. However, Ridley cautions that China’s authoritarian politics and intolerance could eventually stifle innovation, as seen in historical empires, as they make it easy for incumbents to raise barriers to entry.

Regaining Momentum: The Path Forward

Ridley concludes with a call to action for the West, particularly given the implications of the COVID-19 pandemic. He reiterates that innovation is the “child of freedom and the parent of prosperity.” Without it, societies face stagnation, political division, and cultural disenchantment. He hopes that the pandemic, by highlighting the critical value of rapid innovation (e.g., vaccines, diagnostics), will lead to a re-evaluation of regulatory hurdles.

He argues that the “lack of urgency displayed by administrators, consultants, and legal negotiators” in government and public sectors is a major problem, as it is often “not so much that regulators say no, but that they take an age to say yes.” He suggests that reforming the regulatory state to encourage “repeated experiment”—the “perspiration, not the inspiration”—is vital. This could involve:

  • Making permanent the temporary regulatory reforms adopted during the pandemic for medical devices and therapies.
  • Expanding the use of prizes (like the Longitude Prize or the Gates Foundation’s Advance Market Commitment) to incentivize breakthroughs in areas where traditional R&D is unprofitable.
  • Governments buying out patents (as France did for photography) to free up innovation and remove “bottlenecks.”

Ridley ends by emphasizing that innovation is “a very good thing” and that societies abandon it at their peril. The future is thrilling, and it is the “improbability drive of innovation” that will take humanity there, if only we allow it to work.
