Nvidia Book Summary: Stephen Witt’s Complete Story of the GPU Revolution and the AI Boom

Stephen Witt’s book provides a comprehensive history of Nvidia, tracing its journey from a struggling startup in a Denny’s diner to a $3 trillion titan. The narrative focuses on Jensen Huang, a visionary leader who bet the company multiple times on radical technologies like parallel computing and artificial intelligence. The book explains how the confluence of gaming hardware, academic breakthroughs in neural networks, and massive data center infrastructure created the modern AI era.

Readers gain a deep understanding of the semiconductor industry, the engineering philosophy of “first principles,” and the high-stakes cultural environment of Silicon Valley. It serves as both a biography of Huang and a technical history of the GPU (Graphics Processing Unit), illustrating why Nvidia’s CUDA platform has become a “moat” that competitors struggle to cross.

Introduction: The Most Powerful Man in AI

The story of Nvidia is the story of Jensen Huang, the longest-tenured technology CEO in the S&P 500. Huang is characterized by his “first principles” thinking, extreme work ethic, and an obsession with avoiding bankruptcy. He transformed a niche video game accessory vendor into the world’s primary “arms dealer” for artificial intelligence.

Nvidia’s success rests on its early, unpopular bet on parallel computing. While traditional CPUs process tasks one at a time (serially), Nvidia’s chips solve many small problems simultaneously. This architecture proved perfectly suited for Deep Learning, an approach to building software that loosely mimics the structure of the biological brain. Today, every major AI application, from ChatGPT to Midjourney, runs on Nvidia hardware.
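The serial-versus-parallel distinction can be sketched in a few lines; a minimal illustration, using NumPy vectorization as a stand-in for the GPU's many simultaneous lanes:

```python
import numpy as np

# Serial (CPU-style): one element per step.
def scale_serial(xs, k):
    out = []
    for x in xs:
        out.append(x * k)
    return out

# Data-parallel: one operation applied to every element at once,
# the shape of work a GPU's thousands of cores are built for
# (NumPy vectorization stands in for the hardware here).
def scale_parallel(xs, k):
    return (np.asarray(xs) * k).tolist()

data = [1.0, 2.0, 3.0, 4.0]
assert scale_serial(data, 2.0) == scale_parallel(data, 2.0)
```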

Chapter 1: The Bridge

Jensen Huang’s early life was defined by displacement and resilience. Born in Taiwan and raised in Thailand, he was sent to the Oneida Baptist Institute in Kentucky at age ten. The school was a juvenile reform academy where Huang learned to “toughen up” under difficult circumstances.

Survival at Oneida Baptist Institute

Huang and his brother were the only Asian students in a rural, impoverished Kentucky county. He faced constant racial bullying and physical threats, including being shaken while crossing a rickety pedestrian bridge over a river. Instead of retreating, Huang developed a fierce determination to fight back, often wrestling larger boys to a draw.

The Value of Manual Labor

At the institute, Huang was tasked with cleaning toilets and cutting brush with a scythe. He later credited these chores with teaching him the value of hard work and humility. This period formed the basis of his later management style, which emphasizes mastery of fundamentals and a lack of ego when performing difficult or “lowly” tasks.

Early Intellectual Promise

Despite the language barrier, Huang excelled in academics and was often the best student in his class. He developed a habit of writing in all-capital letters and displayed a preternatural ability to focus. These traits followed him to Oregon, where he eventually reunited with his parents and discovered his love for computer science.

The National Ranking in Table Tennis

While in high school in Oregon, Huang became a nationally ranked table tennis player. He spent his summers practicing at the Paddle Palace in Portland, scrubbing floors at night to pay for tournament fees. His aggressive offensive style and extreme focus on practice became metaphors for how he would later run Nvidia.

Chapter 2: Large-Scale Integration

Huang’s professional life began during the silicon revolution of the 1980s. After studying electrical engineering at Oregon State University, he moved to Silicon Valley to work for Advanced Micro Devices (AMD) and later LSI Logic.

Meeting Lori Mills

Huang met his future wife, Lori Mills, in an introductory electrical engineering lab. He won her over by being her study partner and demonstrating a “superpower” for completing difficult homework. Their partnership became a stabilizing force throughout his high-pressure career, with Lori eventually sacrificing her own engineering career to raise their children.

The Toyota Supra Crash

On the night he proposed to Lori, Huang totaled his Toyota Supra on a snowy mountain road. The car flipped and left Huang with injuries that required a neck brace for months. This event served as an early reminder of the thin line between success and disaster, a theme that would recur throughout his life.

Mastering Circuit Design at LSI Logic

At LSI Logic, Huang worked in “the pit,” a cubicle farm where he mastered SPICE circuit-simulation software. He developed a reputation for solving “impossible” technical problems by refusing to get stuck in “dead ends.” He also learned to turn one-off customer orders into reusable design methodologies, the business model built on Large-Scale Integration (LSI) chip design.

The Power of Very Large-Scale Integration

Huang entered the industry as engineers were transitioning to VLSI (Very Large-Scale Integration), which allowed hundreds of thousands of transistors on a single chip. This required a shift from manual drafting to automated software design tools. Huang’s early exposure to these tools convinced him that software was just as important as hardware in the semiconductor business.

Chapter 3: New Venture

In 1993, Huang met with Chris Malachowsky and Curtis Priem at a Denny’s in San Jose to discuss starting a company. They wanted to bring 3D graphics, then limited to expensive workstations, to the consumer PC market.

The Birth of Nvidia

The company was founded on the belief that video games would become the most computationally demanding consumer application. The founders initially called their project “NV” (New Venture) before settling on Nvidia, derived from the Latin word invidia (envy). They incorporated with just $200 in cash.

The Denny’s Command Center

The founders used the quiet back area of a Denny’s as their initial office, surrounded by police officers filling out reports. Huang used his laptop to build a spreadsheet projecting that the company needed $50 million in annual revenue to survive. The location was eventually abandoned after they noticed bullet holes in the restaurant’s front window.

Securing Sequoia Capital

Huang pitched Don Valentine at Sequoia Capital for seed funding. Although the pitch was initially rocky, Sequoia invested because Wilf Corrigan (the founder of LSI Logic) vouched for Huang’s talent. This funding allowed Nvidia to move out of Curtis Priem’s townhouse and into its first commercial office.

The Quest for 3D Graphics

The primary technical goal was to build a graphics accelerator that could “paint” wire-frame skeletons of objects into 3D polyhedra. In 1993, this was a crowded field with over 35 competitors. Nvidia’s strategy was to build a chip that was not only faster but offered a unique architecture called quadratic texture mapping.

Chapter 4: Thirty Days

Nvidia’s first product, the NV1, was a commercial failure. The company nearly went bankrupt as it struggled to adapt to a changing market dominated by Microsoft’s DirectX standard.

The Failure of the NV1

The NV1 used “quadratic mapping” (curved surfaces) while the rest of the industry, including Microsoft, moved toward triangles. This rendered the NV1 incompatible with most new games. Customers returned the cards in droves, and Nvidia’s cash reserves dwindled to nearly zero.

The Brutal Pivot and Layoffs

Huang was forced to lay off most of his staff, cutting the company from roughly 100 people to a skeleton crew of about 35. He told the remaining engineers that the company was “30 days from going out of business.” This phrase became a permanent corporate mantra to prevent complacency during future periods of success.

Betting on the Hardware Emulator

With the last of the company’s money, Huang purchased a hardware emulator, a massive machine that could simulate a microchip’s behavior before it was manufactured. This allowed Nvidia to skip the expensive “prototyping” phase and go straight to mass production. This “flight simulator” for chips became Nvidia’s secret weapon for speed.

Success with the Riva 128

The pivot resulted in the Riva 128 (NV3), which embraced the industry-standard triangle architecture. Because of the emulator, Nvidia beat its competitors to market. The Riva 128 sold a million units in four months, saving the company from insolvency and establishing Nvidia as a serious player in the PC market.

Chapter 5: Going Parallel

By 1998, Nvidia had entered a “death struggle” with market leader 3dfx. To win, Huang adopted a strategy of continuous tactical retreat from Intel while doubling down on a revolutionary architecture: Parallel Computing.

The Demotion of Curtis Priem

As the company grew, internal friction between the founders escalated. Curtis Priem, the CTO and chief architect, favored purist technical solutions that often clashed with market realities. Huang eventually marginalized Priem, taking full control of the technical roadmap and business strategy. Priem remained with the company but lost his influence over future chip designs.

Courting the “Geniuses”

Nvidia began a systematic campaign to “poach” the best engineers from failing competitors. Huang kept a list of rival firms on his office whiteboard and checked them off as they went bankrupt or were absorbed. This consolidated the world’s best GPU talent under one roof.

The Partnership with TSMC

Huang secured a manufacturing deal with Morris Chang of Taiwan Semiconductor Manufacturing Company (TSMC). TSMC’s “foundry” model allowed Nvidia to design chips without owning expensive factories. This partnership became the backbone of Nvidia’s supply chain, allowing the company to scale production rapidly.

John Carmack and the Quest for Quake

John Carmack, the lead programmer of Doom and Quake, was the “Jimi Hendrix” of coding. Nvidia built its chips specifically to satisfy Carmack’s demands for faster frame rates. By optimizing for Quake, Nvidia won the loyalty of “hardcore” gamers, who became the company’s most vocal advocates.

Chapter 6: Jellyfish

While Nvidia focused on gaming, a small group of researchers was keeping the dream of Neural Networks alive. This chapter explores the early history of AI and why it was out of favor for decades.

The First AI Winter

In the 1970s and 80s, “symbolic AI” failed to deliver on its promises, leading to a “winter” where funding vanished. Most researchers believed that neural networks (modeled after the brain) were obsolete toys. However, a “renegade” group including Geoffrey Hinton continued to study them in isolation.

The Breakthrough of Backpropagation

In 1986, David Rumelhart and Geoffrey Hinton published a method called backpropagation. This allowed neural networks to “learn” by adjusting their internal weights based on errors. This was the mathematical foundation for modern deep learning, but it lacked the computing power to be effective at the time.
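A minimal sketch of the idea, assuming a one-weight “network” and squared error (real backpropagation applies the same chain rule layer by layer):

```python
import math

# One-weight "network": y = sigmoid(w * x). Backpropagation runs
# the chain rule backward from the error to get d(error)/dw,
# then nudges the weight against the gradient.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, x, target, lr=1.0):
    y = sigmoid(w * x)                              # forward pass
    grad = 2.0 * (y - target) * y * (1.0 - y) * x   # chain rule
    return w - lr * grad                            # weight update

w = 0.0
for _ in range(200):
    w = train_step(w, x=1.0, target=0.9)
assert abs(sigmoid(w) - 0.9) < 0.05   # the weight has "learned"
```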

Jellyfish and the Backgammon Revolution

The program Jellyfish used a neural network to conquer the game of backgammon. It learned by playing millions of games against itself, discovering strategies that human experts had missed. This proved that neural networks could innovate, not just imitate, though the broader academic community remained skeptical.

The Importance of Reinforcement Learning

Researcher Gerald Tesauro proved that a neural network could reach expert levels through reinforcement learning (trial and error). This was a major departure from “expert systems” that required human-coded rules. However, the lack of massive datasets and powerful hardware meant that neural networks remained limited to games like backgammon and poker.
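The trial-and-error idea can be shown on a toy problem far simpler than backgammon; a sketch of temporal-difference learning on a five-state random walk, with no human-coded strategy rules:

```python
import random

# Toy reinforcement learning: a five-state random walk.
# Reward 1 for reaching the right end, 0 for the left end.
# TD(0) learns each state's value purely from experience.
random.seed(0)
values = [0.0, 0.5, 0.5, 0.5, 1.0]  # ends are fixed terminal values
ALPHA = 0.02                         # learning rate

for _ in range(20000):               # episodes of trial and error
    s = 2                            # always start in the middle
    while s not in (0, 4):
        s2 = s + random.choice((-1, 1))
        # Nudge V(s) toward the value of the state we moved to.
        values[s] += ALPHA * (values[s2] - values[s])
        s = s2

# True chances of ending on the right: 0.25, 0.50, 0.75.
assert abs(values[1] - 0.25) < 0.1
assert abs(values[2] - 0.50) < 0.1
assert abs(values[3] - 0.75) < 0.1
```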

Chapter 7: Deathmatch

Nvidia’s GeForce line, launched in 1999, established the “GPU” as a new category of processor. It focused on “Transformation and Lighting” (T&L), moving complex calculations from the CPU to the graphics card.

The “Fatal1ty” Edge

Professional gamer Johnathan “Fatal1ty” Wendel used Nvidia hardware to gain a milliseconds-long advantage in multiplayer Quake. In high-level play, higher frame rates translated directly into faster reaction times. This created a culture where serious gamers felt they had to buy Nvidia to remain competitive.

Inventing the “GPU”

Nvidia’s marketing team, led by Dan Vivoli, coined the term “Graphics Processing Unit” (GPU) to differentiate the GeForce from standard accelerators. They wanted to signal that the chip was a co-processor equal in importance to the CPU. The goal was to build a “Matrix” where reality and simulation were indistinguishable.

The Fall of 3dfx

Nvidia used its rapid six-month shipping cycle to overwhelm 3dfx, which struggled with manufacturing delays. When 3dfx attempted to sue Nvidia for patent infringement, Huang counter-sued and stalled the legal proceedings. In 2000, a bankrupt 3dfx was forced to sell its assets to Nvidia, ending the first great graphics war.

The “Darth Vader” Reputation

Inside the industry, Huang acquired a reputation for being unapologetically carnivorous. Rivals called him “Darth Vader” because of his aggressive hiring tactics and legal maneuvering. However, Huang maintained that he was simply doing whatever was necessary to avoid going out of business.

Chapter 8: The Compulsion Loop

Nvidia went public in 1999, making Huang a centimillionaire. However, the company soon faced a series of crises, including an SEC investigation and the “Bumpgate” manufacturing defect.

The $20 Billion Valuation

At its peak in the early 2000s, Nvidia was more valuable than Enron and was added to the S&P 500. The stock’s success led Huang to get the Nvidia logo tattooed on his arm, a move he later joked about because of the physical pain.

Friction with Microsoft and Xbox

Nvidia provided the chip for the original Microsoft Xbox. However, the relationship soured due to pricing disputes and Microsoft’s desire to control the “intellectual property.” When Microsoft moved to ATI for the Xbox 360, it was a major blow to Nvidia’s prestige and revenue.

The “Bumpgate” Disaster

In 2008, Nvidia discovered a defect in the soldering “bumps” of its laptop chips, causing them to fail when they overheated. This led to a $200 million write-off and a collapse in the stock price. Huang handled the crisis by publicly apologizing and setting aside a massive reserve for refunds, a move that eventually restored customer trust.

Mastering the “Compulsion Loop”

Nvidia’s growth was driven by the “compulsion loop” of PC gaming. Games like World of Warcraft and Half-Life 2 created an endless demand for more power. Gamers became “enthusiasts” who treated their PCs like muscle cars, constantly upgrading their hardware to achieve maximum performance.

Chapter 9: CUDA

In 2006, Nvidia launched CUDA (Compute Unified Device Architecture). This was Huang’s most significant “zero-billion-dollar market” bet, allowing the GPU to be used for general-purpose scientific computing.
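CUDA’s core abstraction is the kernel: one small function executed by thousands of threads, each keyed to an index. A Python sketch of that per-thread view for the classic SAXPY operation (the thread grid is simulated by a loop, and the function names are illustrative, not CUDA API):

```python
# A CUDA "kernel" is one small function run by thousands of
# threads, each identified by an index. This sketch mimics that
# per-thread view for SAXPY (y = a*x + y); the thread grid is
# simulated with a loop.
def saxpy_kernel(thread_id, a, x, y):
    i = thread_id            # on a GPU: blockIdx.x*blockDim.x + threadIdx.x
    if i < len(x):           # guard: extra threads do nothing
        y[i] = a * x[i] + y[i]

def launch(kernel, n_threads, *args):
    for t in range(n_threads):   # a real launch runs these concurrently
        kernel(t, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 10.0, 10.0]
launch(saxpy_kernel, 4, 2.0, x, y)   # launch more threads than elements
assert y == [12.0, 14.0, 16.0]
```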

Ian Buck and the Wall of Projectors

Graduate student Ian Buck realized that GPUs could be hacked to perform scientific calculations. He built a system that ran Quake across 32 projectors, essentially creating a DIY supercomputer. Huang hired Buck to turn this “hack” into a formal programming language.

The Death of Moore’s Law

Nvidia architect John Nickolls predicted that Moore’s Law (the doubling of transistors every two years) was slowing down due to heat and power limits. He convinced Huang that the only way to continue increasing computer power was through parallelism. This meant moving away from one fast processor toward thousands of slower ones working in concert.

The “CUDA Tax”

Huang decided to include CUDA cores on every single gaming chip Nvidia sold. This was expensive and initially had no market, leading investors to criticize the “CUDA tax.” However, Huang believed that if Nvidia didn’t reinvent itself as a computing platform, it would eventually be commoditized by Intel.

Searching for a “Killer App”

For the first few years, CUDA was a commercial failure. The only customers were niche researchers, such as doctors simulating mammograms. Wall Street analysts argued that Huang was wasting billions of dollars on a “science project” with no clear path to profitability.

Chapter 10: Resonance

Bill Dally, the chair of Stanford’s computer science department, joined Nvidia as Chief Scientist in 2009. He helped refine the company’s vision for High-Performance Computing (HPC).

The “Speed of Light” Philosophy

Huang developed a management concept called the “speed of light.” He encouraged managers to identify the absolute fastest a task could be completed if everything went perfectly. They then worked backward from this “unattainable ideal” to set aggressive real-world deadlines.

The First GTC Conference

In 2009, Nvidia hosted its first GPU Technology Conference (GTC). Huang dubbed it the “Woodstock of high-performance computing.” While it drew mostly scientists and academics, it established Nvidia as a thought leader in the transition from graphics to general-purpose computing.

The Law of “Arithmetic Intensity”

Bill Dally argued that as data grew, the bottleneck in computing shifted from the processor to the memory. Parallel computing solved this by moving data in massive “chunks” rather than one bit at a time. This made GPUs thousands of times faster than CPUs for certain mathematical tasks.
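Dally’s point is usually expressed as “arithmetic intensity,” the ratio of arithmetic operations to bytes of memory traffic; a rough worked sketch (the operation and byte counts are textbook approximations, not Nvidia figures):

```python
# Arithmetic intensity = arithmetic operations per byte of memory
# traffic. Low-intensity kernels are memory-bound; high-intensity
# kernels (like matrix multiply) can keep a GPU's math units busy.
# Counts below are textbook approximations for float32 data.
def arithmetic_intensity(flops, bytes_moved):
    return flops / bytes_moved

n = 1_000_000
# Dot product: 2n operations, 8n bytes read -> 0.25 ops/byte.
dot_ai = arithmetic_intensity(2 * n, 8 * n)

m = 1_000
# m x m matrix multiply: 2*m^3 operations over ~12*m^2 bytes.
mm_ai = arithmetic_intensity(2 * m**3, 12 * m**2)

assert dot_ai == 0.25
assert mm_ai > 100 * dot_ai   # vastly more math per byte moved
```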

The “Zero-Billion-Dollar Market” Strategy

Huang pursued markets that didn’t yet exist, such as self-driving cars and robotic surgery. He believed that if he built the hardware and the software tools, the customers would eventually arrive. This strategy required extreme patience and a willingness to see the stock price “flatline” for a decade.

Chapter 11: AlexNet

In 2012, the world of AI changed forever when Alex Krizhevsky and Ilya Sutskever used Nvidia GPUs to win the ImageNet competition.

The University of Toronto Rebels

Working under Geoffrey Hinton, Krizhevsky and Sutskever built a neural network called AlexNet. Because they couldn’t afford a supercomputer, they used two retail Nvidia gaming cards ($500 each) in Krizhevsky’s bedroom.

The 2012 ImageNet Breakthrough

AlexNet didn’t just win the competition; it beat the previous state of the art by a margin of more than ten percentage points. It proved that neural networks, when trained on GPUs, could recognize objects with unprecedented accuracy. This was the “Big Bang” moment for modern AI.

The Million-Dollar “Auction”

Hinton, Krizhevsky, and Sutskever founded a “company” with no assets other than their brains and held an auction via email. Google won the bidding war for $44 million, beating out Microsoft and Baidu. This signaled that the “AI talent war” had begun in earnest.

Transforming the GPU into an AI Engine

Following the ImageNet results, every major AI researcher switched to Nvidia hardware. The GPU was no longer just for games; it was for Deep Learning. Huang realized that the “killer app” for CUDA had finally arrived.

Chapter 12: O.I.A.L.O.

In 2013, Huang made the decision to pivot the entire company to AI. He called it a “Once in a Lifetime Opportunity” (O.I.A.L.O.).

Bryan Catanzaro and the Cat Experiment

Nvidia researcher Bryan Catanzaro worked with Andrew Ng to replicate Google’s “cat recognition” experiment. While Google used 2,000 CPUs, Catanzaro achieved the same result with just 12 GPUs. This proved the massive efficiency advantage of Nvidia’s architecture.

Betting the Company (Again)

Huang sent a company-wide email on a Friday night declaring that Nvidia was no longer a graphics company but an AI company. He cleared the whiteboard in his office and wrote “O.I.A.L.O.” He ordered every engineer in the company to learn how to program neural networks.

The Development of cuDNN

Nvidia developed cuDNN, a software library that optimized neural network training on GPUs. By giving this software away for free, Nvidia created a “lock-in” effect: researchers who learned on cuDNN were unlikely to switch to rival hardware. Software became Nvidia’s primary moat.

Resisting the Activist Investors

When the activist hedge fund Starboard Value tried to force Nvidia to cut its research spending, Huang refused. He spent weekends reading about AI and convinced the board that the “CUDA tax” was about to pay off. Starboard eventually sold its stake and missed the greatest stock run in history.

Chapter 13: Superintelligence

The rise of AI led to concerns about “existential risk.” In 2015, Elon Musk and Sam Altman founded OpenAI as a nonprofit to protect humanity from rogue superintelligence.

The “Paper-Clip Maximizer” Worry

Philosopher Nick Bostrom popularized the idea that an AI might accidentally destroy humanity while pursuing a seemingly harmless goal (like making paper clips). Musk and other “doomers” feared that AI would eventually “outthink its makers” and become uncontrollable.

The Founding of OpenAI

OpenAI was created to “open source” AI research and prevent any one company (like Google) from having a monopoly on superintelligence. Nvidia supported the project by donating its first DGX-1 supercomputer to the group, which Huang personally delivered and signed.

Elon Musk vs. Jensen Huang

At the 2015 GTC conference, Musk and Huang shared a stage. While Musk spoke of “summoning the demon,” Huang focused on the practical benefits of AI in cars and medicine. This highlighted the split in Silicon Valley between those who feared AI and those who saw it as a pure force for progress.

The “Intelligence Factory” Concept

Huang began to describe data centers as “AI factories.” Just as the industrial revolution used factories to turn raw materials into goods, the AI revolution would use GPUs to turn raw data into intelligence. This was a fundamental shift in how the world viewed “computing.”

Chapter 14: The Good Year

2016-2017 was a period of explosive growth. Nvidia’s stock tripled as the company dominated gaming, scientific computing, and the emerging Nintendo Switch console.

The Nintendo Switch Victory

Nvidia beat out AMD to supply the chip for the Nintendo Switch. While consoles were traditionally low-margin, the Switch’s massive success provided a stable revenue stream and proved that Nvidia’s mobile chips (Tegra) were world-class.

Lisa Su and the Family Rivalry

AMD’s new CEO, Lisa Su, is a distant relative of Jensen Huang (first cousins once removed). Under her leadership, AMD emerged as Nvidia’s only real competitor in the GPU space. However, Huang maintained a massive lead in software, making it difficult for AMD to gain ground in AI.

The Pascal Architecture

Nvidia’s Pascal architecture introduced 16-bit floating-point math, which was faster and more efficient for AI training than the 32-bit and 64-bit precision used in traditional graphics and scientific computing. This was a key “first principles” move: Huang realized that AI didn’t need to be perfectly precise; it just needed to be fast.
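The precision trade-off is easy to demonstrate; a small sketch using NumPy’s half-precision type:

```python
import numpy as np

# Half precision (float16) keeps ~3 decimal digits in 2 bytes;
# float32 keeps ~7 digits in 4 bytes. For neural-network training
# the lost digits rarely matter, but twice as many values fit in
# the same memory and move across the bus per cycle.
w32 = np.float32(0.1234567)
w16 = np.float16(w32)

assert w16.nbytes == 2 and w32.nbytes == 4   # half the storage
assert float(w16) != float(w32)              # some precision is lost...
assert abs(float(w16) - float(w32)) < 1e-3   # ...but very little
```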

The $100 Billion Market Cap

By 2017, Nvidia had surpassed the $100 billion valuation mark. The company was no longer a “niche vendor” but a pillar of the global economy. Huang’s decade of “stagnation” and “science projects” had resulted in a near-monopoly on the hardware needed for the future.

Chapter 15: The Transformer

In 2017, Google researchers published “Attention Is All You Need,” introducing the Transformer architecture. This paved the way for Large Language Models (LLMs) like GPT.

The End of Recurrent Networks

Earlier language models, built on recurrent neural networks (RNNs), processed text one word at a time, making them slow and difficult to parallelize. The Transformer allowed the computer to “pay attention” to all words in a sentence simultaneously. This was a perfect match for Nvidia’s parallel GPUs.

“Attention Is All You Need”

The Transformer architecture allowed AI to understand context. It could figure out that the word “bank” meant something different in “river bank” versus “money bank” by looking at the surrounding words. This was the breakthrough that made GPT-1 and GPT-2 possible.
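The “attention” computation behind this is a handful of matrix multiplies, which is exactly what GPUs accelerate; a toy single-head sketch with made-up 2-dimensional word vectors:

```python
import numpy as np

# Toy single-head attention: every word's query is scored against
# every word's key in one matrix multiply, and the resulting
# weights mix the value vectors. The 2-d "embeddings" are made up.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[1])          # all pairs at once
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per row
    return weights @ V                              # weighted mix

# Three "words", each a 2-d vector.
Q = K = V = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 0.1]])
out = attention(Q, K, V)
assert out.shape == (3, 2)   # one context-mixed vector per word
```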

The Birth of GPT

OpenAI researcher Ilya Sutskever realized that if you scaled the Transformer architecture with enough data and enough GPUs, it would develop “emergent” properties like the ability to write poetry or code. This led to the “generative AI” boom.

Google’s “Innovator’s Dilemma”

Although Google invented the Transformer, it was hesitant to release it because it might threaten its search business. This allowed startups like OpenAI to use the technology to disrupt the market. Nvidia benefited from both sides, selling chips to Google for research and to OpenAI for deployment.

Chapter 16: Hyperscale

As AI models grew, the world entered the era of Hyperscale. Companies like Microsoft and Amazon began building massive data centers filled with hundreds of thousands of GPUs.

The Mellanox Acquisition

Huang spent nearly $7 billion to acquire Mellanox, a company that specialized in high-speed networking (InfiniBand). He realized that in a supercomputer, the “connection” between chips was just as important as the chips themselves. This allowed Nvidia to build “system-level” computers.

The “CUDA Moat” Solidifies

By 2020, over 2 million developers were using CUDA. Even though competitors like Intel and AMD offered cheaper chips, the cost of switching software was too high for most companies. Nvidia had successfully turned a hardware product into an indispensable software ecosystem.

The $40 Billion ARM Bid

Nvidia attempted to buy ARM, the company that designs the chips for almost every smartphone in the world. However, the deal was blocked by global regulators who feared that Nvidia would become too powerful. Huang abandoned the deal but continued to integrate ARM designs into his AI servers.

The Sovereign AI Trend

Nations began to realize that AI was a matter of national security. Countries like the UK, Japan, and France started building their own “Sovereign AI” clouds using Nvidia hardware to ensure their data remained under their own control.

Chapter 17: Money

The launch of ChatGPT in November 2022 triggered a global investment frenzy. Nvidia’s stock went vertical as companies scrambled to build their own generative AI models.

The ChatGPT “Spiritual Experience”

ChatGPT reached 100 million users in two months. Its ability to answer complex questions and write human-sounding text felt like magic. This was the first time the general public realized the power of the AI that Nvidia had been building for a decade.

Microsoft’s $10 Billion Bet

Microsoft invested heavily in OpenAI, securing exclusive access to its models. Most of this money eventually flowed back to Nvidia, as Microsoft built the world’s largest AI supercomputer (Project Mack Truck) to run OpenAI’s models.

The “Marginal Cost of Zero”

Huang argued that AI would reduce the “marginal cost of intelligence” to near zero. He believed this would spark a productivity boom across every industry, from drug discovery to legal services. This optimism drove Nvidia’s valuation past $1 trillion.

The GPU as Collateral

Nvidia’s H100 chips became so valuable that they were used as collateral for multi-billion dollar loans. In Silicon Valley, “GPU-rich” companies (those with massive Nvidia clusters) became the new elite, while “GPU-poor” startups struggled to compete.

Chapter 18: Spaceships

Nvidia’s new headquarters, Endeavor and Voyager, reflect the company’s obsession with 3D geometry and “first principles” design.

Designing the “Spaceships”

The buildings are in the shape of triangles (the fundamental unit of 3D graphics). Huang used a VR headset to personally inspect the design, ensuring that every detail—from the skylights to the floor grates—was optimized for light and collaboration.

The “Living Wall” and No Executive Suites

Nvidia’s HQ has no traditional executive offices. Huang works out of a common conference room to remain “in the flow” of traffic. The open-plan design is intended to foster “collision” between engineers from different departments.

The “Digital Twin” of Earth

Huang is building Omniverse, a platform to create “digital twins” of everything from factories to the entire planet. He believes that in the future, every physical object will be “simulated” in the Omniverse before it is built in reality.

Tracking Employees with AI

Nvidia uses its own AI to monitor its headquarters, optimizing everything from janitorial schedules to energy usage. The company is “eating its own dog food,” using its technology to run its business at the “speed of light.”

Chapter 19: Power

The massive energy demands of AI have become a major global challenge. This chapter explores the “bottleneck” of the electrical grid.

The Gigawatt Data Center

Modern AI data centers require more power than entire cities. This has led companies like Microsoft and Meta to explore nuclear power and custom grid upgrades. The “bottleneck” of AI is no longer the chip; it’s the transformer on the utility pole.

Nvidia’s Climate Forecasting

Despite the high energy usage, Huang argues that AI will help solve climate change. Nvidia’s Earth-2 model can predict extreme weather events with unprecedented accuracy, allowing nations to prepare for the impacts of a warming planet.

The “Liquid Cooling” Transition

Because Nvidia’s new chips (Blackwell) generate so much heat, the industry is moving from air cooling to liquid cooling. This requires a fundamental redesign of data center infrastructure, another transition that Nvidia is leading.

The “Efficiency” Paradox

Nvidia researchers point out that while AI uses a lot of power, it is thousands of times more efficient than traditional computing. One GPU can do the work of thousands of CPUs, potentially reducing the overall “carbon footprint” of global IT in the long run.

Chapter 20: The Most Important Stock on Earth

In early 2024, Nvidia’s earnings reports became the “most important data points in the world economy.” The company’s valuation rivals Apple and Microsoft.

The $277 Billion Single-Day Gain

In February 2024, Nvidia added more value in one day than the entire market cap of Coca-Cola. This was the largest single-day accumulation of wealth in Wall Street history.

Jensen’s “Haiku” Management

Huang manages a company of 30,000 people with over 50 direct reports. He has no COO or Chief of Staff. He communicates via hundreds of short, direct emails per day, expecting his vice presidents to be “autonomous agents” who solve problems without constant supervision.

The “Financial Volunteers”

Because of the stock’s growth, many long-time Nvidia employees are now worth tens of millions of dollars. Huang encourages them to stay not for the money, but for the “life’s work” of building the future of intelligence.

The Pivot to the “AI Factory”

Huang has stopped calling Nvidia a “chip company.” He now calls it an “AI factory company.” He believes that every company in the future will have an “AI factory” in the basement, just as they once had a boiler room or a server room.

Chapter 21: Jensen

The personal brand of Jensen Huang—the black leather jacket, the tattoo, the Denny’s roots—has become a symbol of the new industrial revolution.

The Twenty-Four Black Shirts

Huang wears the same outfit every day to reduce “decision fatigue.” He credits his wife and daughter with his “glow-up,” transitioning from a “nerdy state-school student” to a global fashion icon and the face of AI.

The “Marginal Cost of Math”

Huang often asks: “When the marginal cost of doing math goes to zero, what do you do?” He believes that AI will automate the “mundane” parts of engineering and science, allowing humans to focus on higher-level creativity and problem-solving.

The Family Business

Both of Huang’s children, Spencer and Madison, eventually joined Nvidia. Despite his wealth, Huang maintains a “super normal” home life, often cooking for his family and taking his dogs for walks in the hills of Los Altos.

A Legacy of “Resonance”

Huang’s success is attributed to his “resonance” with his customers. He listens to researchers and gamers with the same intensity, ensuring that Nvidia’s products always meet the “pulse” of the market. He remains “paranoid” that the company is always thirty days from failure.

Chapter 22: The Fear

Despite the success, “pioneers” like Yoshua Bengio and Geoffrey Hinton are increasingly worried about AI safety.

The “p(doom)” Debate

“p(doom)” is the probability that AI will destroy humanity. While researchers like Bengio have a high p(doom) (up to 50%), Huang remains a firm AI optimist. He believes that AI will always be a “tool” under human control.

The Split in the Turing Trio

Geoffrey Hinton quit Google to warn about AI risks, creating a rift with his long-time collaborator Yann LeCun, who believes AI “doomerism” is ridiculous. This debate has split Silicon Valley into “accelerationists” and “decelerationists.”

The Veto of SB 1047

California attempted to pass a law (SB 1047) to regulate large AI models. It was opposed by Nvidia and most of the tech industry, who argued it would “kill innovation.” The bill was eventually vetoed by Governor Gavin Newsom, signaling a win for the accelerationists.

The “Alignment” Problem

OpenAI’s Ilya Sutskever left the company to focus on “alignment”—ensuring that a superintelligence would share human values. This remains one of the great unsolved problems in computer science: how to build something smarter than us that won’t eventually ignore or harm us.

Chapter 23: The Thinking Machine

The book concludes with the launch of the Blackwell chip in 2024. Huang believes we have reached the “limit of calculation,” moving from arithmetic to reasoning.

The “GTC 2024” Hockey Arena

Huang’s keynote took place at an NHL arena, attended by 17,000 people. He presented the Blackwell GPU, which has 208 billion transistors and can train AI models in minutes that previously took months.

The Era of “Inference”

The focus of the industry is shifting from “training” (learning) to “inference” (thinking). Huang believes that in the future, every computer interaction—from emails to driving—will be an “inference” performed by an AI.
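The training/inference split the book describes can be made concrete with a toy sketch (my own illustration, not anything from Nvidia’s stack): training is the expensive loop that repeatedly adjusts a model’s parameters against data, while inference is the single cheap forward pass that uses the finished parameters to answer a query.

```python
import numpy as np

# Toy data the model can fit exactly: y = 3x.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 6.0, 9.0, 12.0])

# --- Training: the costly part. Repeatedly nudge the weight
# to shrink the mean-squared error on the data. ---
w = 0.0
lr = 0.05
for _ in range(200):
    pred = X * w
    grad = 2 * np.mean((pred - y) * X)  # d(MSE)/dw
    w -= lr * grad

# --- Inference: the cheap part. One forward pass with the
# learned weight, repeated for every future query. ---
def infer(x):
    return x * w

print(round(infer(5.0), 2))  # the learned w converges to ~3.0
```

Real models do this with billions of weights instead of one, which is why the industry’s shift from training clusters to inference at scale matters so much to Nvidia’s roadmap.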

Jensen’s “Serious Work”

When asked about AI risks, Huang grew angry, insisting that he is a “serious person doing serious work.” He rejects the “sci-fi” narratives of AI takeover, arguing that Nvidia is simply making the world’s most useful tools.

The Future of Humanity

Nvidia is no longer just a company; it is the infrastructure of the future. Whether AI saves us or destroys us, it will happen on the “silicon mazes” designed by Jensen Huang. The book ends with a vision of a world where the “marginal cost of intelligence” has truly reached zero.

Key Takeaways: What You Need to Remember

Core Insights from Nvidia

  • Parallelism is Everything: The shift from serial (CPU) to parallel (GPU) processing was the fundamental enabler of modern AI.
  • Software is the Moat: Nvidia’s success isn’t just about chips; it’s about the CUDA software ecosystem that millions of developers rely on.
  • The “First Principles” Approach: Jensen Huang succeeds by ignoring industry trends and focusing on the physical limits of what silicon and electricity can do.
  • Niche Today, Mainstream Tomorrow: Nvidia found its “killer app” in AI after spending a decade serving tiny groups of researchers in “zero-billion-dollar markets.”
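The serial-versus-parallel contrast in the first takeaway can be sketched in plain Python (an illustrative analogy of mine, not GPU code): a CPU-style loop touches one element at a time, while a vectorized operation applies one instruction across the whole array at once, the way a GPU spreads the same work over thousands of cores.

```python
import numpy as np

data = np.arange(100_000, dtype=np.float64)

# Serial, CPU-style: process one element per step.
def scale_serial(xs, factor):
    out = np.empty_like(xs)
    for i in range(len(xs)):
        out[i] = xs[i] * factor
    return out

# Parallel-style: one instruction over the entire array.
# NumPy's vectorization stands in here for a GPU's many cores.
def scale_vectorized(xs, factor):
    return xs * factor

# Same result, but the vectorized form is dramatically faster
# on large arrays -- the core intuition behind the GPU bet.
assert np.array_equal(scale_serial(data, 2.0),
                      scale_vectorized(data, 2.0))
```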

Immediate Actions to Take Today

  • Understand the “Speed of Light”: Apply Huang’s scheduling philosophy to your own projects by identifying the theoretical maximum speed of a task and working backward.
  • Invest in Foundations: Focus on building skills or platforms that solve fundamental problems (like data processing or intelligence) rather than chasing fleeting trends.
  • Adopt a “Day One” Mindset: Live by the mantra that your business is “30 days from failure” to avoid the complacency that kills established firms.

Questions for Personal Application

  • Am I ignoring an “unpopular” technology today that could become the foundation of my industry tomorrow?
  • Is my current workflow optimized for the “serial” past or the “parallel” future?
  • How would my decision-making change if I assumed the “marginal cost of intelligence” was going to zero?