
The Great Mental Models: General Thinking Concepts by Shane Parrish – Unlock Smarter Decisions and a Better Life
Shane Parrish’s “The Great Mental Models: General Thinking Concepts” offers readers a profound journey into understanding how the world truly works, thereby enabling better decision-making and a more fulfilling life. Through the lens of mental models – powerful, time-tested ideas from diverse disciplines – Parrish argues that the quality of our thinking directly correlates with the models residing in our minds. By learning to see reality as it is, rather than as we wish it to be, we can uncover hidden opportunities, sidestep costly errors, and make tangible progress in all aspects of our lives. This book, the first in a series, serves as a foundational guide, meticulously breaking down core mental models from fields like biology, physics, chemistry, economics, and systems, presenting them in clear, accessible language to help you master the best of what others have already discovered. We’ll explore every significant idea, example, and insight, ensuring you gain a comprehensive grasp of these invaluable thinking tools.
The author, Shane Parrish, founder of Farnam Street, shares his personal transformation, spurred by the inadequacy of traditional education in preparing him for complex, real-world decisions. Thrust into high-stakes roles at an intelligence agency post-9/11, he realized his computer science degree offered little guidance on navigating human dynamics and impactful choices. His quest for better decision-making led him to the wisdom of Charlie Munger, Warren Buffett’s business partner, who advocated for a “broad latticework of mental models” from various disciplines. This revelation became the driving force behind Farnam Street and this book series, aimed at equalizing opportunity by making high-quality, multidisciplinary education freely available. This summary will delve into each foundational mental model, exploring its definition, application, and the profound impact it can have on your understanding and actions.
Introduction: Acquiring Wisdom
This chapter introduces the fundamental premise of the book: that acquiring wisdom, defined as the skill for finding the right solutions for the right problems, hinges on building a latticework of mental models. It emphasizes that thinking better is not about being a genius, but about using processes that uncover reality and lead to informed choices. The ultimate goal is to avoid problems by understanding how the world works and adjusting our behavior accordingly, rather than just solving them after they arise.
The Power of Mental Models
Mental models are representations of how something works, simplifying complexity into understandable chunks that shape our thinking, understanding, and beliefs, often subconsciously. They help us infer causality, match patterns, and draw analogies. While millions of models exist, this series focuses on the “all-star team” – those with the broadest utility across various life situations. Volume One specifically introduces nine general thinking concepts, described as useful tools that allow us to view situations through different lenses, revealing layers of reality and enabling rational decisions even without a clear path. The core idea is that fundamentals of knowledge are universally available across disciplines, and understanding these principles helps us navigate the universe effectively.
Why Multidisciplinary Thinking Matters
Not having a multidisciplinary mindset creates blind spots, making us vulnerable to mistakes that compound into catastrophes. By drawing on diverse knowledge from mental models across fields like biology, physics, and economics, we can minimize risk and increase freedom. Understanding reality means breaking problems into substantive parts to reveal interconnections, leading to better actions. Simple problems may need few lenses, but complicated, multidimensional issues greatly benefit from a wider array of models. The more lenses applied, the more reality is revealed, leading to greater understanding and clearer actions.
Staying Grounded in Reality
The text highlights the importance of constantly testing understanding against reality and updating it. Just as the mythical Antaeus lost strength when separated from Mother Earth, our understanding weakens when it loses contact with real-world feedback. Pontificating without applying ideas is ineffective; genuine learning comes from putting ideas into action and observing the results. If we don’t test our ideas against the real world, we cannot be sure of our understanding.
Overcoming Barriers to Learning
The biggest obstacle to learning from reality is ourselves, due to inherent blind spots. The author identifies three key flaws that prevent us from updating our beliefs:
- Perspective: Like Galileo’s ship analogy, we often can’t see the full system we’re operating within without an external vantage point. We need to be open to other perspectives to understand the true results of our actions.
- Ego: Our ego makes us resistant to feedback, fearing criticism or being wrong. This prevents us from putting ideas out there or, once challenged, leads to defending rather than upgrading our ideas. Honest self-assessment is crucial.
- Distance from Consequences: The further we are from the direct results of our decisions, the easier it is to maintain flawed views. Immediate feedback (like touching a hot stove) forces quick updates, but at higher, more abstract levels, ego creates narratives that suit our desires instead of reality.
These flaws lead to repeating mistakes and prevent learning through reflection. The solution lies in actively seeking feedback and being willing to admit when we are wrong, prioritizing long-term happiness over short-term ego protection.
The Value of Simple Ideas
The book posits that we tend to undervalue elementary, simple ideas and overvalue complicated ones, often due to a professional need for specialized knowledge. However, simple ideas from fundamental disciplines are of great value because they help prevent complex problems and apply universally. “Most geniuses… prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.” These elementary principles form a time-tested foundation, allowing us to understand underlying dynamics even when details change, and adapt our tactics accordingly.
Mental Models in Practice
Using the example of gravity, the text explains how mental models, even if not fully understood in detail, allow us to anticipate and explain phenomena. A mental model of gravity helps us design bridges, understand planetary motion, and even use metaphors like “pulled into her orbit.” The crucial point is that all models are flawed to some extent. Some are reliable in specific situations but useless in others, while some are unreliable or plain wrong. The goal is to identify reliable and useful models and discard or update flawed ones, as they lead to misunderstandings, suboptimal actions (like bloodletting), and avoidable errors.
Building Your Latticework
The quality of our thinking is influenced by the variety and accuracy of mental models in our heads. Specialization often leads to overusing a few familiar models, creating blind spots (e.g., an engineer seeing only systems, a psychologist only incentives). The “blind men and the elephant” parable illustrates this: each expert grasps only a part of the truth. To overcome this, we need a “latticework of mental models,” where models connect and reinforce each other, reducing blind spots and enabling a more well-rounded understanding. This interdisciplinary approach, championed by Charlie Munger, suggests that 80-90 important models can cover most of what’s needed to be “worldly-wise.”
Continuous Improvement and Application
Successfully applying mental models requires time, effort, and continuous practice. It’s not enough to just know them; we must use them in decisions, reflecting on the process and outcomes. This means being deliberate about choosing models, recording experiences, and seeking feedback. Failures, when acknowledged and learned from, also build expertise. The process might initially be slower and imperfect, but over time, it becomes more efficient and effective. The ultimate goal is to align our decisions with how the world really is, leading to more confidence, more success, more time, less stress, and a more meaningful life.
The Map is not the Territory
This chapter introduces the critical mental model that the map of reality is not reality itself. It emphasizes that even the best maps are imperfect reductions of what they represent, serving as snapshots in time that may not reflect current conditions. Understanding this distinction is crucial for navigating complex problems and making better decisions.
Understanding the Concept
Coined and popularized by mathematician Alfred Korzybski in 1931, the concept “the map is not the territory” means that the description of a thing is not the thing itself. A model is not reality; an abstraction is not the abstracted. Key characteristics of maps include:
- Structure: A map may have a similar or dissimilar structure to the territory. The London underground map, for example, is useful for travelers but not for train drivers. Maps have a specific purpose and cannot be everything to everyone.
- Logical Characteristics: Similar structures imply similar “logical” characteristics. If a map correctly shows Dresden between Paris and Warsaw, that relationship holds true in reality.
- Distinct from Territory: The map is not the actual territory. The underground map doesn’t convey the experience of being in a station.
- Self-Reflexiveness: An ideal map would endlessly contain maps of maps, but this level of detail would be overwhelming and impractical.
We constantly consume abstractions (like news articles) created by others, which simplify vast information. The danger is that something is lost in the process – specific, relevant details. When we treat these abstractions as gospel without doing our own mental work, we inadvertently forget that the map is not reality.
The Dangers of Mistaking the Map for Reality
Mistaking the map for reality leads to problems: we assume we have all the answers, create static rules or policies based on a fixed map, and forget that the world is dynamic. This closes off feedback loops, reducing our ability to adapt to changing environments. If the goal becomes simplification over understanding, we make bad decisions. Maps should be flexible and dynamic, adapting as territories change.
The example of Newtonian physics illustrates this. For centuries, it was an incredibly useful map, explaining gravity and celestial motion. However, Albert Einstein’s theory of relativity created a new, more accurate map, demonstrating that Newtonian physics had limitations. Physicists understand these limits, carefully delimiting where each map is useful. When they encounter uncharted territory, like quantum mechanics, they explore it carefully instead of assuming existing maps explain everything.
Limitations of Maps
Maps can’t show everything, and a significant problem is that risks of the territory are often not shown on the map. To truly understand a model, map, or reduction, we must understand and respect its limitations. If we don’t know what a map does and doesn’t tell us, it can be useless or even dangerous.
The Tragedy of the Commons model, though useful for illustrating how shared resources can be overused due to bad incentives, can be dangerous if applied as a universal truth without considering real-world solutions. Elinor Ostrom warned that the constraints assumed fixed in models are often taken as fixed in reality, leading to flawed public policy. Models are tools for exploration, not doctrines. As George Box noted, “all models are wrong; the practical question is how wrong do they have to be to not be useful.”
Important Considerations for Using Maps
To use maps and models accurately, three considerations are crucial:
- Reality is the ultimate update: Maps should be constantly updated based on real-world experiences. Stereotypes, for instance, are simplified maps of people; their danger lies in forgetting the territory (individuals) is far more complex. Karimeh Abbud’s photography in Palestine provided a “new map” by capturing middle-class life as she saw it, rejecting European ethnographic styles, and offering a different, more nuanced historical perspective. Maps capture a moment in time, and their accuracy wanes with rapid change in the territory.
- Consider the cartographer: Maps are not objective; they reflect the values, standards, and limitations of their creators. National boundaries on world maps, for example, often reflect political interests (like the Sykes-Picot line in the Middle East) rather than objective geographical or cultural realities. Understanding the cartographer’s intent and context helps interpret the map.
- Maps can influence territories: Jane Jacobs famously chronicled how city planners, with their elaborate models, tried to force cities to fit these models, leading to negative consequences. Her work, The Death and Life of Great American Cities, serves as a cautionary tale of what happens when faith in a model dictates changes in reality, rather than the model adapting to reality.
Examples of Map/Territory Dynamics
The model of management evolved from Frederick Taylor’s theory of Scientific Management, effective for factories, to more nuanced approaches as the economy changed. Taylor’s model became less useful as workers adapted to incentives, competitors adopted similar methods, and the context shifted from factory to office settings. It also failed to account for human motivations beyond financial ones. Better models emerged over time, showing that a map’s utility is context-dependent.
Lewis Carroll’s Sylvie and Bruno humorously illustrates the absurdity of a “one mile to one mile” map, which would be perfectly accurate but utterly useless for navigation. Maps are necessary to condense territory, but this compression inherently introduces flaws.
Circle of Competence
This chapter introduces the concept of a “circle of competence,” highlighting that understanding what you truly know (and what you don’t) is paramount for effective decision-making. When ego, rather than competence, drives action, it creates blind spots, leading to poor outcomes. Conversely, honesty about knowledge gaps allows for improvement and better results.
What is a Circle of Competence?
A circle of competence is the area of knowledge and skill that you genuinely possess, built over years of experience, study, and reflection. The distinction between a “Lifer” (someone with deep, intimate knowledge of a specific domain) and a “Stranger” (someone with superficial understanding) illustrates this. The Lifer, like the long-time town resident, has a detailed, interconnected web of information, understands nuances, anticipates objections, and has multiple solutions to problems. The Stranger, like the new city slicker, may quickly grasp basics but lacks the depth to make truly informed decisions, leading to overconfidence and increased risk. True competence requires more than skimming the surface; it demands years of engagement and learning from failures.
How to Identify Your Circle of Competence
Within your circle, you know precisely what you don’t know. You can make decisions quickly and accurately, understand what information is needed (and what is unobtainable), and distinguish between the knowable and unknowable. You can anticipate objections and draw on diverse resources due to your deep fluency.
The daunting task of climbing Mount Everest serves as a powerful example. For most, it’s outside their circle of competence because they don’t even know what they don’t know. The specialized knowledge of Sherpas, like Tenzing Norgay, who spent decades learning the mountain through experience and “lucky failures,” exemplifies true competence. Their intimate understanding of the terrain, weather, and human limits is invaluable. Attempting such a feat without respecting the Sherpas’ deep knowledge is a recipe for disaster, as evidenced by the many bodies preserved on the mountain. As Alexander Pope wrote, “A little learning is a dangerous thing.”
Building and Maintaining a Circle of Competence
A circle of competence is dynamic, not static, and requires continuous effort:
- Curiosity and Desire to Learn: Learning is a product of experience meeting reflection. You can learn from your own experiences, or more productively, from the experiences of others through books, articles, and conversations. Always approach your circle with curiosity, seeking to expand and strengthen it.
- Monitoring: You must honestly monitor your track record in areas within (or desired for) your circle. Overconfidence often stems from a lack of honest self-reporting. Keeping a precise journal of performance (e.g., investment trades, leadership decisions) helps identify patterns and mistakes, allowing for reflection and learning. This process is painful for the ego but essential for improvement.
- External Feedback: Periodically soliciting feedback from trusted individuals is critical for both building and maintaining competence. Atul Gawande, a top surgeon, hired a coach to identify subconscious suboptimal techniques, demonstrating the value of an outside perspective to overcome biases and defensiveness.
Operating Outside Your Circle of Competence
Since it’s impossible to be competent in everything, developing strategies for when you’re a “Stranger” is vital:
- Learn the Basics: Acquire fundamental knowledge of the new realm, but always acknowledge your “Stranger” status. Be wary of unwarranted confidence that basic information can bring.
- Consult Experts (Lifers): Talk to someone with strong competence in the area. Instead of just asking for answers, ask detailed and thoughtful questions to learn how they “fish.” When seeking advice, especially in high-stakes situations, probe the limits of their expertise and consider how their incentives might influence the information they provide.
- Apply General Mental Models: Use a broad understanding of basic mental models from other disciplines to augment your limited knowledge. These models can help identify foundational concepts and serve as a guide in unfamiliar territory.
The problem of incentives is a critical consideration when relying on others’ competence. Financial advisors, mechanics, or salespeople may have incentives that don’t align with your best interests. Understanding their compensation arrangements or what they stand to gain is crucial.
Queen Elizabeth I serves as an excellent historical example of operating wisely outside her initial circle of competence. Despite a precarious political situation, she explicitly stated her intent to “direct all my actions by good advice and counsel.” She built a diverse, small Privy Council of trusted advisors, blending old and new perspectives, fostering open debate. Her ability to admit what she didn’t know and seek varied counsel led to a stable, prosperous reign, laying the groundwork for an empire.
Supporting Idea: Falsifiability
This section introduces Karl Popper’s concept of falsifiability, a cornerstone of empirical science. Popper argued that a theory is part of empirical science “if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.”
The Core of Falsifiability
For a theory to be scientific, it must have an element of risk – it must be able to be proven wrong under stated conditions. This means one should be able to articulate: “If x happens, it would show demonstrably that theory y is not true.” If observation shows the predicted effect is absent, the theory is refuted. Falsification is the opposite of verification; actively trying to prove a theory incorrect actually strengthens it if you fail to do so. Evolution serves as an example: natural selection eliminates what doesn’t work, strengthening the fitness of the population.
Distinguishing Science from Pseudoscience
Popper used Freud’s psychoanalytic theory as an example of a non-falsifiable theory. While it might contain kernels of truth, it cannot be proven true or false because it doesn’t make specific, testable predictions that risk being wrong. For it to be scientific, it would need to be restated in a way that allows for experience to refute it.
Popper also attacked “historicism” – the idea that history follows fixed laws or trends leading to inevitable outcomes. He viewed this as pseudoscience or dangerous ideology that tempts state planners to control society. Such historicist doctrines are not falsifiable. For instance, the “Law of Increasing Technological Complexity” cannot be tested because it’s not a falsifiable hypothesis. These “laws” often become immune to falsifying evidence, with new evidence being interpreted through the lens of the theory.
The example of Bertrand Russell’s chicken, which assumes daily feedings are a law until its head is chopped off, illustrates how a trend is not destiny. We tend to assume the worst that has happened is the worst that can happen, neglecting to prepare for extremes allowed by physics rather than just historical precedent. Applying the filter of falsifiability helps identify more robust theories. If a theory cannot be proven false, we can only determine its probability of being true.
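Popper’s criterion – being able to state “if x happens, theory y is not true” – can be sketched in a few lines of code. This is a purely illustrative toy (the theories and observations are invented, not from the book): a falsifiable theory names observations that would refute it, while a non-falsifiable one forbids nothing and so can never be refuted.

```python
def is_refuted(theory, observations):
    """A theory is refuted if any observation matches one of the
    conditions it explicitly forbids."""
    return any(obs in theory["forbids"] for obs in observations)

falsifiable = {
    "claim": "all swans are white",
    "forbids": {"non-white swan"},   # names a refuting observation
}
unfalsifiable = {
    "claim": "history tends toward complexity",
    "forbids": set(),                # forbids nothing; immune to evidence
}

print(is_refuted(falsifiable, ["white swan", "non-white swan"]))  # True
print(is_refuted(unfalsifiable, ["any observation at all"]))      # False
```

The empty `forbids` set is the tell: no conceivable observation could ever count against the second claim, which is exactly what makes it pseudoscientific in Popper’s sense.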
First Principles Thinking
First principles thinking is presented as a powerful method for reverse-engineering complex situations and fostering creativity. It involves separating the underlying ideas or facts from assumptions built upon them, leaving only the essentials. Knowing these fundamental truths allows for the construction of new knowledge and solutions.
The Foundation of Knowledge
Rooted in ancient philosophy (Plato, Socrates, Aristotle, Descartes), first principles thinking seeks foundational knowledge that is non-reducible in a given context. It’s not about finding absolute, unchanging truths, but identifying the boundaries within which we must work. For example, in improving a refrigerator’s energy efficiency, the laws of thermodynamics can be treated as first principles. However, a physicist might delve deeper, breaking down the second law into its underlying principles and assumptions. The more we understand, the more we can challenge existing assumptions.
Techniques for Establishing First Principles
To cut through dogma and shared beliefs, two techniques are highlighted:
- Socratic Questioning: A disciplined questioning process used to establish truths, reveal underlying assumptions, and distinguish knowledge from ignorance. It systematically challenges assumptions by asking:
- Clarifying thinking: Why do I think this? What exactly do I think?
- Challenging assumptions: How do I know this is true? What if I thought the opposite?
- Looking for evidence: How can I back this up? What are the sources?
- Considering alternative perspectives: What might others think? How do I know I am correct?
- Examining consequences and implications: What if I am wrong? What are the consequences?
- Questioning original questions: Why did I think that? Was I correct? What conclusions can I draw?
This method slows down thinking, limits emotional responses, and helps build lasting understanding.
- The Five Whys: This method, rooted in children’s instinctive curiosity, involves repeatedly asking “why?” to systematically delve deeper into a statement or concept. The goal is to reach a “what” or “how” – a falsifiable fact. If the “whys” lead to “because I said so” or “it just is,” it indicates an assumption based on popular opinion or dogma, not a first principle.
Both methods force us to confront our own ignorance, but they are essential for long-term clarity. As Carl Sagan said, “Science is much more than a body of knowledge. It is a way of thinking.”
Blowing Past Inaccurate Assumptions
The discovery that bacteria (H. pylori), not stress, caused stomach ulcers is a prime example of first principles thinking in action. For decades, the “dogma of the sterile stomach” was accepted as a first principle, preventing doctors from looking for bacterial causes. Robin Warren and Barry Marshall challenged this assumption, systematically questioning it and seeking evidence. Their persistence, despite initial rejection from the scientific community, led to a Nobel Prize and revolutionary treatment for ulcers. This shows how ingrained dogmas, even if based on “because I said so,” can be overcome by identifying and challenging the actual first principles.
Incremental Innovation and Paradigm Shifts
First principles thinking helps us understand why things are successful or not, preventing blind copying of tactics. It’s crucial for both incremental improvements and radical innovation. Temple Grandin’s curved cattle chute is an example. While the curved chute itself is a tactic, Grandin’s first principle was reducing stress to animals. When later research showed straight chutes could also work, Grandin explained that the tactic could change if the underlying principle (stress reduction) was addressed.
This thinking can also lead to paradigm shifts. Scientists asking “what are the first principles of meat?” (taste, texture, smell, cooking use) discovered that being “part of an animal” was not a first principle. This led to lab-grown artificial meat, which replicates the core properties of meat, potentially eliminating the need for raising animals and addressing significant environmental and ethical concerns. As Harrington Emerson observed, “The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.”
Thought Experiment
This chapter explores thought experiments as powerful “devices of the imagination used to investigate the nature of things.” They allow us to learn from mistakes, avoid future ones, evaluate consequences, re-examine history, and determine what we truly want and how to achieve it, even when physical experimentation is impossible.
The Power of Mental Simulation
The comparison between predicting a basketball game between LeBron James and Woody Allen versus LeBron James and Kevin Durant illustrates the core of a thought experiment. In the first case, the outcome is obvious due to stark differences, allowing for a confident “bet.” In the second, the similarity of skill makes it impossible to predict with certainty without actually seeing them play. The key is that in both cases, we simulate the contest in our minds.
A rigorous thought experiment, much like the scientific method, involves:
- Asking a question: What is the specific problem or scenario?
- Conducting background research: Gather necessary information about the elements involved.
- Constructing a hypothesis: Formulate a potential answer or outcome.
- Testing with (thought) experiments: Mentally run through scenarios, changing variables to observe potential influences.
- Analyzing outcomes and drawing conclusions: Evaluate the results of the mental simulations.
- Comparing to hypothesis and adjusting: Refine the hypothesis or ask new questions based on insights.
The ability to change variables endlessly in a thought experiment is its real power. It allows for estimating the full spectrum of possible outcomes, leading to a better appreciation of what can be influenced and what is reasonably expected.
Key Applications of Thought Experiments
The chapter highlights three main areas where thought experiments are immensely useful:
- Imagining Physical Impossibilities: Albert Einstein’s elevator thought experiment helped him formulate the general theory of relativity. By imagining being in a closed elevator, unable to distinguish between acceleration in space and gravity on Earth, he intuited that these forces were equivalent. This allowed him to define properties of physically impossible scenarios, providing enough information to test hypotheses mathematically. Common phrases like “if money were no object” or “if you had all the time in the world” are informal thought experiments, revealing what we truly value by removing constraints. The Trolley Experiment is another classic example used to explore ethical dilemmas where real-world experimentation is impossible, significantly advancing questions of morality.
- Re-Imagining History (Historical Counterfactuals and Semi-factuals): This involves asking “What if Y happened instead of X?” (e.g., What if Gavrilo Princip hadn’t shot Archduke Franz Ferdinand?). While popular, this application requires extreme caution because history is a chaotic system. Small changes can lead to vastly different outcomes, much like weather prediction. The unpredictability of chaotic systems makes definitive conclusions difficult. The goal is not to predict the exact past, but to understand the realistic relationships between events and the most likely effects of any one decision. By imagining multiple scenarios where an event (like WWI) could have occurred even without the specific trigger, we can gauge the actual impact of that trigger. This helps us understand that historical events are just “one realization of the historical process” among many possibilities.
- Intuiting the Non-Intuitive: Thought experiments help verify if our natural intuition is correct. John Rawls’s “veil of ignorance” thought experiment, from A Theory of Justice, asks how society would be structured if designers didn’t know their own future status (economic, gender, talents). This forces a more fair and equitable design, challenging initial intuitions about fairness. This can be applied to designing company policies (e.g., HR, parental leave) by considering them from an unknown role within the company.
Reducing the Role of Chance
The “Google stock on margin” example demonstrates how a thought experiment can reveal the limits of what we know and what we should attempt. While doubling money seems genius, mentally simulating scenarios where the stock falls 50% (leading to a margin call and ruin) highlights the inherent risks and the role of luck. This process helps us recognize when good outcomes are due to chance, prompting reflection on our decision-making process to reduce the role of chance in the future.
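The margin scenario is easy to work through with concrete numbers. This is a hypothetical illustration (the dollar amounts and 2:1 leverage are invented for the example, not taken from the book): borrowing doubles your exposure, so a 50% drop in the stock doesn’t halve your money – it erases it.

```python
own_cash = 10_000                 # your own capital
borrowed = 10_000                 # broker loan (2:1 leverage)
position = own_cash + borrowed    # $20,000 of stock purchased

def equity_after(drop):
    """Your remaining equity after the stock falls by `drop` (a fraction).
    The loan must be repaid in full regardless of the stock price."""
    return position * (1 - drop) - borrowed

print(equity_after(0.0))    # 10000.0 -> unchanged
print(equity_after(0.25))   # 5000.0  -> a 25% dip costs half your capital
print(equity_after(0.50))   # 0.0     -> wiped out: margin call, total ruin
```

Running the same function over many hypothetical drops is exactly the thought experiment the text describes: it exposes the downside scenarios that the lucky, realized outcome hides.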
Supporting Idea: Necessity and Sufficiency
This section introduces the crucial distinction between necessary and sufficient conditions, highlighting a common mistake: assuming that meeting necessary conditions guarantees a desired outcome.
Understanding the Concepts
- Necessary Condition: A condition that must be present for an event or effect to occur. If the necessary condition is absent, the event cannot happen. For example, knowing how to write well is necessary to become a published author.
- Sufficient Condition: A condition that, if present, guarantees an event or effect will occur. If the sufficient condition is met, the event will happen. Knowing how to write well is not sufficient to become J.K. Rowling; many other factors (talent, luck, market timing) are also needed.
The gap between necessary and sufficient conditions often includes luck, chance, or other factors beyond direct control. Achieving something complex, like building a Fortune 500 company, requires many necessary conditions (capital, hard work, intelligence), but none of them are sufficient on their own. Billionaire success, for example, requires all these plus significant luck. This explains why there’s no simple “recipe” for such success.
Real-World Applications
- Military Battles: Preparation (evaluating enemy, developing a plan, logistics) is necessary for winning a battle, but not sufficient. Many other unpredictable factors contribute to success.
- Professional Sports: Physical capability, time, and training are necessary for professional athletic success, but not sufficient. Many talented athletes never reach professional ranks due to competition, injuries, or lack of opportunity.
In set terms, the necessary conditions form a subset of the sufficient set, which is much larger and encompasses all the factors needed for the outcome. Understanding this distinction helps avoid being misled by incomplete stories and prevents the mistaken attribution of success solely to necessary efforts when other external factors are also at play.
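The subset relationship can be illustrated with a small Python sketch; the condition names here are invented purely for illustration, not taken from the book.

```python
# Hypothetical sketch: necessary conditions vs. the larger sufficient set
# for becoming a bestselling author. The condition names are invented.
necessary = {"writing skill", "finished manuscript"}
sufficient = necessary | {"talent", "luck", "market timing", "publisher interest"}

assert necessary <= sufficient   # every necessary condition is in the sufficient set
assert necessary != sufficient   # but necessary conditions alone don't guarantee success

# The gap between the two sets is exactly the "luck and other factors"
# the text describes.
print(sufficient - necessary)
```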
Second-Order Thinking
This chapter introduces second-order thinking as a crucial mental model for making better decisions by looking beyond immediate consequences. While first-order thinking anticipates only immediate results, second-order thinking considers the subsequent effects of those actions, taking a holistic view. Failing to do so can lead to disaster, often in the form of “unintended consequences.”
The Law of Unintended Consequences
The concept is vividly illustrated by historical examples where well-intentioned actions led to negative ripple effects. The British government’s bounty on cobras in Delhi backfired because citizens began breeding snakes for the reward, worsening the problem. Similarly, adding traction to tires offers immediate safety benefits (first-order) but leads to worse gas mileage and more carbon emissions (second-order). “You can never merely do one thing,” as Garrett Hardin’s First Law of Ecology states. We operate in a complex web of interconnected relationships, where actions have far-reaching consequences that may not be immediately apparent.
The extensive use of antibiotics in livestock feed provides a contemporary example. The first-order effect is increased animal weight and profit. However, the second-order effect is the creation of antibiotic-resistant bacteria, which then enter our food chain, posing a serious public health threat. This outcome could have been anticipated by anyone with a basic understanding of biology and evolution, demonstrating the importance of looking beyond immediate gains. “When we try to pick out anything by itself, we find it hitched to everything else in the Universe,” noted John Muir.
Underlying Concepts and Practical Applications
Second-order thinking emphasizes two important concepts:
- Inclusion of subsequent effects: To understand how the world works, we must include second and subsequent effects in our analysis, observing the web of connections.
- Long-term vs. Short-term: It often means prioritizing long-term interests over immediate gains. The candy example highlights this: immediate pleasure (first-order) versus long-term health consequences (second-order).
Second-order thinking is not about predicting the future with certainty, but about considering likely consequences based on available information. Finding historical examples can be tricky, as outcomes alone don’t reveal the thinking process.
Cleopatra’s alliance with Caesar in 48 BC serves as a strong historical example of second-order thinking. Despite the immediate first-order negative effects (angering her brother, starting a civil war, risking assassination), Cleopatra likely foresaw the long-term benefits: with Caesar’s support, her reign had a much greater chance of success and stability. This decision, though causing short-term pain, yielded enormous long-term payoffs, as “the Alexandrian War gave Cleopatra everything she wanted. It cost her little.”
Second-order thinking is also crucial for constructing effective arguments. When persuading others, demonstrating that you have considered and addressed the second-order effects makes your argument more compelling. Mary Wollstonecraft’s argument for women’s education in A Vindication of the Rights of Woman is a classic example. Instead of just arguing for women’s rights (first-order), she explained the societal benefits: educated women would become better wives, mothers, and citizens (second-order effects). This shifted the conversation and laid the groundwork for feminism.
Avoiding the Slippery Slope Effect
A crucial caveat is to avoid the “Slippery Slope Effect”, which posits that a single action inevitably leads to a chain of disastrous consequences (A leads to B, C, D, E, F, etc.). As Garrett Hardin notes, such reasoning treats human beings as though they were “completely devoid of practical judgment.” While some actions can lead to bad outcomes, most situations have limits. Second-order thinking should focus on the most likely effects and their most likely consequences, not every improbable possibility. Worrying about all possible effects can lead to analysis paralysis, preventing any action at all.
Probabilistic Thinking
This chapter highlights probabilistic thinking as a fundamental tool for improving decision accuracy in an unpredictable world. It involves estimating the likelihood of specific outcomes using mathematical and logical tools, allowing for more precise and effective decisions.
Why Probabilities Matter
The future is inherently unpredictable due to an infinitely complex set of factors and the compounding effect of even small errors in data. Since we lack perfect information, we cannot know with certainty if an event will happen. Probabilistic thinking allows us to estimate the future by generating realistic, useful probabilities, helping us navigate uncertainty. While human brains evolved heuristics for survival in simpler times, modern complex systems require a conscious layer of probability awareness to thrive.
Key Aspects of Probabilistic Thinking
Three important aspects are covered:
- Bayesian Thinking (Bayesian Updating): Developed by Thomas Bayes, this approach dictates that we should adjust probabilities when encountering new data, incorporating all relevant prior information. It’s about using “base rates” – outside information about past similar situations – to contextualize new information. For instance, a headline about “Violent Stabbings on the Rise” might cause alarm, but Bayesian thinking involves factoring in prior knowledge that violent crime rates have been declining for decades. If the rate doubles from 0.01% to 0.02%, the overall safety hasn’t been significantly compromised. Conversely, the steady, long-term increase in diabetes diagnoses in the US, when viewed through a Bayesian lens, indicates a genuinely worrisome trend. Priors are themselves probability estimates, and new information can reduce their probability of being true, leading to updates or replacement. It’s always a mistake not to ask: What are the relevant priors?
- Conditional Probability: Similar to Bayesian thinking, this concept emphasizes that outcomes of an event can be conditional on what preceded them. When using historical events to predict the future, one must be mindful of the surrounding conditions. For example, my choice of vanilla ice cream might be independent if all flavors are available, or dependent/conditional if chocolate is already gone. This means being careful to observe conditions preceding an event.
- Fat-Tailed Curves (vs. Bell Curves): Most people are familiar with the bell curve (normal distribution), where extreme events are predictable and deviations from the mean are capped (e.g., human height). However, many real-world phenomena follow fat-tailed curves, where there is no real cap on extreme events (e.g., wealth). While any single extreme event is still unlikely, the sheer number of possibilities in the tail means we cannot rely on common outcomes as representing the average. Crazy things are definitely going to happen, and we have no way of identifying when. Nassim Taleb highlights that small errors in measuring fat-tailed risks can lead to being off by orders of magnitude. The key is not to predict every scenario, but to position ourselves to survive or benefit from unpredictable futures by understanding that we operate in a volatile, fat-tailed world.
- Antifragility: Coined by Nassim Taleb, antifragility describes things that benefit from volatility and unpredictability, unlike fragile things (harmed by volatility) or robust things (neutral to volatility). The world is fundamentally unpredictable, and large events have disproportionate impacts. Rather than trying to predict, we should prepare. This involves:
- Upside optionality: Seeking situations with good odds of opportunities (e.g., attending a networking party). The worst is often nothing, but the upside can be significant.
- Learning how to fail properly: Never take a risk that can completely destroy you. Develop the resilience to learn from failures and start again. Failure carries the gift of learning, making one less vulnerable to volatility.
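The stabbing-headline arithmetic from the Bayesian discussion above can be made concrete with a short sketch. The rates are the illustrative figures from the text, not real crime statistics.

```python
# Base-rate sketch using the text's illustrative numbers: a headline reports
# that violent stabbings have "doubled", but the prior rate was tiny.
prior_rate = 0.0001   # 0.01% of the population affected (assumed base rate)
new_rate = 0.0002     # 0.02% after the reported doubling

relative_increase = new_rate / prior_rate   # 2x: what the headline emphasizes
absolute_increase = new_rate - prior_rate   # one extra case per 10,000 people

print(f"relative increase: {relative_increase:.0f}x")
print(f"absolute increase: {absolute_increase:.2%} of the population")
# The Bayesian move is to weigh the headline against the prior: a 2x jump
# in a 0.01% base rate leaves overall safety essentially unchanged.
```

Headlines sell the relative number; the prior tells you the absolute number is what matters.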
Asymmetries (Metaprobability)
This concept, or “metaprobability,” refers to the probability that our probability estimates themselves are any good. There’s a common asymmetry where people’s probability estimates are skewed towards over-optimism. Investors often aim for unrealistic returns, and people consistently underestimate traffic delays. Few people aim low and achieve much higher; most aim high and fall short.
The spy world exemplifies successful probabilistic thinking in high-stakes situations. Vera Atkins, second in command of the British Special Operations Executive during WWII, had to make life-or-death decisions based on unreliable information. She assigned probabilities to factors when recruiting spies (e.g., confidence, language skills) and deploying them, understanding that intelligence is not evidence and that risks are high. The losses (100 of the roughly 400 agents deployed were captured or killed) show that even with meticulous probabilistic thinking, 100% success is never guaranteed.
The chapter concludes by highlighting that probabilistic thinking helps us roughly identify what matters, gauge odds, check assumptions, and make decisions with a higher level of certainty in complex, unpredictable situations. It is an extremely useful tool for strategizing based on the most likely future outcomes. Insurance companies are masters of this, pricing policies for highly specific and seemingly improbable events by meticulously calculating probabilities based on available data and carefully assessing relevant factors.
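The contrast between thin-tailed and fat-tailed worlds described in this chapter can be sketched with a small simulation. The distributions and parameters below are assumptions chosen for illustration (heights as a normal distribution, wealth as a Pareto distribution), not data from the book.

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed: human height, roughly normal (mean 170 cm, sd 10 cm assumed).
heights = [random.gauss(170, 10) for _ in range(N)]

# Fat-tailed: wealth, sketched with a Pareto distribution (alpha assumed).
wealths = [random.paretovariate(1.16) for _ in range(N)]

def tail_share(xs):
    """Fraction of the total contributed by the top 1% of observations."""
    xs = sorted(xs, reverse=True)
    return sum(xs[: len(xs) // 100]) / sum(xs)

# In the thin-tailed world the top 1% barely matters; in the fat-tailed
# world it dominates the total: extreme events drive the outcomes.
print(f"top 1% share of height total: {tail_share(heights):.1%}")
print(f"top 1% share of wealth total: {tail_share(wealths):.1%}")
```

This is why relying on "average" outcomes is safe for height but dangerous for wealth, markets, or catastrophes.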
Supporting Idea: Causation vs. Correlation
This section addresses the widespread confusion between causation and correlation, which frequently leads to inaccurate assumptions and poor decisions. We often observe two phenomena occurring together (correlation) and mistakenly conclude that one causes the other (causation).
Defining the Terms
- Correlation: A statistical relationship between two or more variables, meaning they tend to change together. This relationship can be positive (both increase or decrease together), negative (one increases as the other decreases), or non-existent.
- No Correlation: A correlation coefficient close to 0 indicates no linear relationship between the variables, like bottled water consumption and suicide rates.
- Perfect Correlation: A coefficient of 1 (or -1) means the measures are solely dependent on the same factor, like Celsius and Fahrenheit temperatures.
- Weak to Moderate Correlation: Many human science phenomena show some shared explanatory power but also other influencing factors, like height and weight.
- Causation: A relationship where one event or action directly leads to another event or outcome.
- A correlation does not imply causation. For example, a study might show a correlation between high parental alcohol consumption and low academic success in children. It’s tempting to conclude that parental drinking causes poor academic outcomes. However, it’s also possible that having kids who do poorly in school causes parents to drink more, or that a third, unmeasured factor influences both. Inverting the relationship (asking if the effect could cause the presumed cause) can help sort this out.
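The parental-drinking example can be simulated to show how a hidden third factor produces a strong correlation with no direct causation. The "household stress" confounder and all numbers below are invented for illustration.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical confounder: household stress drives BOTH parental drinking
# and children's school trouble; neither directly causes the other here.
stress = [random.gauss(0, 1) for _ in range(10_000)]
drinking = [s + random.gauss(0, 0.5) for s in stress]
school_trouble = [s + random.gauss(0, 0.5) for s in stress]

r = pearson(drinking, school_trouble)
print(f"correlation: {r:.2f}")  # strongly positive despite no direct causal link
```

A study measuring only the two visible variables would find a robust correlation and could easily mistake it for causation.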
The Problem of Regression to the Mean
When correlation is imperfect, extremes will soften over time, a phenomenon called regression to the mean. The best will appear to get worse, and the worst will appear to get better, regardless of any intervention. This often leads to mistakenly attributing a specific policy or treatment as the cause of an effect that would have happened anyway.
Daniel Kahneman’s example of depressed children improving after drinking an “energy drink” illustrates this. Depressed children are an extreme group, and they will naturally improve somewhat over time due to regression to the mean, even with no intervention. Without a control group (which also experiences regression to the mean), it’s impossible to tell if the “treatment” had a real effect beyond natural variation. In real-life situations where a control group isn’t possible (e.g., evaluating individual performance), disentangling regression effects can be very difficult.
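Kahneman's energy-drink example can be reproduced with a toy simulation: each score is part stable trait, part day-to-day noise, and the most extreme group drifts back toward the mean with no treatment at all. All parameters are assumed for illustration.

```python
import random

random.seed(1)

N = 10_000
trait = [random.gauss(50, 10) for _ in range(N)]       # stable component
score_day1 = [t + random.gauss(0, 10) for t in trait]  # trait + noise
score_day2 = [t + random.gauss(0, 10) for t in trait]  # fresh noise, no treatment

# Select the 5% most extreme scorers on day 1 (the "depressed group").
extreme = sorted(range(N), key=lambda i: score_day1[i], reverse=True)[: N // 20]

mean_before = sum(score_day1[i] for i in extreme) / len(extreme)
mean_after = sum(score_day2[i] for i in extreme) / len(extreme)

print(f"extreme group, day 1: {mean_before:.1f}")
print(f"extreme group, day 2: {mean_after:.1f}")  # closer to the mean, no intervention
```

The group "improves" simply because the day-1 extremes were partly luck, which does not repeat; only a control group can separate this from a real treatment effect.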
Inversion
Inversion is a powerful thinking tool that involves approaching a situation from the opposite end of the natural starting point, effectively “turning it upside down.” Instead of always thinking forward, inversion encourages thinking backward to identify and remove obstacles to success, rather than directly seeking brilliance.
Two Approaches to Inversion
- Assume the Opposite is True: Start by assuming what you’re trying to prove is either true or false, and then logically determine what else would necessarily have to be true. This was the method of 19th-century mathematician Carl Jacobi (“invert, always invert”), who solved difficult problems by starting with the endpoint. Hippasus’s attempt to find the square root of 2, pursued by proving what the answer could not be, led to the discovery of irrational numbers. The Sherlock Holmes case “A Scandal in Bohemia” beautifully illustrates this: Holmes inverted the problem of finding a compromising photograph by assuming its existence and then deducing where it would logically be hidden for quick retrieval, staging a false fire to reveal its location.
- Identify What You Want to Avoid: Instead of aiming directly for a positive goal, think deeply about what you want to avoid and then see what options remain. This is a common strategy championed by Charlie Munger: “All I want to know is where I’m going to die so I’ll never go there.”
Real-World Applications of Inversion
- Selling Cigarettes to Women (Edward Bernays): In the 1920s, the American Tobacco Company wanted to sell Lucky Strike cigarettes to women despite social taboos. Edward Bernays didn’t ask “How do I sell more cigarettes to women?” He inverted: “If women bought and smoked cigarettes, what else would have to be true?” This led him to reshape American society and culture, linking smoking with women’s emancipation (“torches of freedom”) and making cigarettes ubiquitous in social settings and homes. By changing the environment and perception, selling cigarettes became easy.
- Index Funds (John Bogle, Vanguard): Instead of trying to “beat the market” (a difficult and often losing proposition), John Bogle inverted the question: “How can we help investors minimize losses to fees and poor money manager selection?” This led to the creation of index funds, which simply track the market, minimizing costs and maximizing long-term wealth by avoiding common pitfalls. This principle applies to personal finance: instead of just aiming to “get rich,” first focus on “avoiding being poor” by eliminating behaviors that erode wealth (e.g., spending more than you make, high-interest debt, delayed saving).
- Force Field Analysis (Kurt Lewin): Psychologist Kurt Lewin’s force field analysis recognizes that managing change involves both augmenting forces that support an objective and reducing or eliminating forces that impede it. Most people focus only on the former (e.g., new training). Inversion means considering not only what to do to solve a problem, but also what would make it worse – and then avoiding or eliminating those factors.
- Reducing Mortality in Hospitals (Florence Nightingale): During the Crimean War, Florence Nightingale used statistics to understand what was killing British soldiers in military hospitals. By inverting the problem from “how do we fix this” to “how do we stop it from happening in the first place,” she identified poor sanitation as the leading cause of death. Her famous polar-area chart visually demonstrated this, leading to sanitary reforms that drastically reduced mortality rates. Her work exemplifies using inversion to prevent problems.
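The fee-avoidance logic behind the index-fund example above can be sketched with simple compounding. The 7% gross return and 1% annual fee are assumptions chosen for illustration, not figures from the book.

```python
# Compounding sketch: the same gross return with and without a 1% annual fee.
principal = 10_000
years = 30
gross_return = 0.07
fee = 0.01

no_fee = principal * (1 + gross_return) ** years
with_fee = principal * (1 + gross_return - fee) ** years

fee_drag = (no_fee - with_fee) / no_fee
print(f"no fee:   ${no_fee:,.0f}")
print(f"1% fee:   ${with_fee:,.0f}")
print(f"fee drag: {fee_drag:.0%} of the final balance")
```

A seemingly small annual fee compounds into a loss of roughly a quarter of the final balance over 30 years, which is why inverting toward "minimize losses" beats chasing "beat the market."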
Caveats to Inversion
While powerful, inversion must be applied with judgment. It’s not about avoiding all action due to “analysis paralysis” from the Slippery Slope Effect. Instead, it’s about evaluating the most likely effects and consequences and understanding the typical results of actions. Inversion is about making the complicated simple, leading to innovation, and ensuring that by avoiding stupidity, we move closer to brilliance.
Occam’s Razor
Occam’s Razor is a classical principle of logic and problem-solving that states simpler explanations are more likely to be true than complicated ones. It’s a powerful tool for avoiding unnecessary complexity, encouraging decisions based on explanations with the fewest moving parts, which are easier to falsify, understand, and generally more likely to be correct.
The Essence of Simplicity
Named after William of Ockham (“a plurality is not to be posited without necessity”), the razor is a guiding principle, not an absolute law. If two competing explanations have equal explanatory power, the simpler one is preferred. This principle was also noted by David Hume, who argued for skepticism towards miracles. Hume and Carl Sagan suggested that the simplest explanation for a “miracle” is usually that the witness is mistaken, or that the phenomenon is a natural occurrence not yet understood by science, rather than a supernatural event. “Extraordinary claims require extraordinary proof.”
Examples of Occam’s Razor in Action
- Dark Matter: In the 1970s, astronomer Vera Rubin observed that galaxies were rotating “all wrong” – stars at the edges moved as fast as those near the center, violating Newton’s Laws. The simplest explanation, first theorized by Fritz Zwicky in 1933, was “dark matter” – an invisible mass influencing gravitational behavior. Despite never being directly observed, dark matter remains the simplest explanation for observed galactic phenomena. The mathematical reason simpler explanations are often correct is that they introduce fewer variables; if each variable has a small chance of error, a complex explanation with many variables has a much higher probability of being wrong. However, Rubin herself considered that if dark matter continued to be elusive, a modification to our understanding of gravity might be a simpler explanation.
- Increasing Efficiency through Simplicity:
- LA’s Ivanhoe Reservoir: Faced with preventing bromate (a carcinogen) formation when chlorine mixed with bromide and sunlight in their drinking water, the DWP brainstormed complex solutions like tarps or domes. A biologist’s simple suggestion of “bird balls” (UV-deflecting floating balls used by airports) proved the most effective and cost-efficient solution, requiring no construction or maintenance.
- Bengal Tigers: When Bengal tigers killed villagers, a student observed that tigers only attacked when they thought they were unseen. The simple solution: human face masks worn on the back of the head. This remarkably effective deterrent stemmed from a simple observation and avoided complex, ineffective methods.
- Occam’s Razor in Medicine: The saying “When you hear hoofbeats, think horses, not zebras” encapsulates Occam’s Razor in medical diagnosis. If a patient presents with flu-like symptoms, the simplest explanation is the flu, not Ebola, given the relative probabilities. This avoids unnecessary panic and costly misdiagnoses. For patients, it helps counteract hypochondria by encouraging them to consider the most likely explanation for symptoms.
- Leadership (Louis Gerstner at IBM): When Louis Gerstner took over struggling IBM in the 1990s, many called for a grand vision. Gerstner famously stated, “the last thing IBM needs right now is a vision.” He applied Occam’s Razor, focusing on simple, tough-minded business execution: serving customers, competing for current business, and focusing on profitable areas. This direct, no-frills approach brought IBM back from the brink, proving that sometimes the simplest solution is the most effective.
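The mathematical point made in the dark-matter discussion, that every extra variable adds a chance of error, can be shown in a few lines of arithmetic. The 95% per-assumption reliability is an assumed figure for illustration.

```python
# If each assumption independently has a 95% chance of being right,
# stacking assumptions compounds the chance that at least one is wrong.
p_correct = 0.95  # assumed reliability of a single assumption

for n in (1, 3, 10):
    print(f"{n:>2} assumptions -> {p_correct ** n:.0%} chance they all hold")
```

An explanation resting on ten independent assumptions holds up only about 60% of the time, which is the probabilistic case for preferring the explanation with fewer moving parts.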
Caveats to Occam’s Razor
It’s important not to apply Occam’s Razor to create artificial simplicity where none exists. Some things, like pyramid schemes or human flight, are genuinely complex and cannot be reduced further without losing accuracy. An explanation can only be simplified to the extent that it still provides an accurate understanding and performs its necessary functions. The goal is to focus on simplicity when others are focused on complexity, thereby conserving time and energy.
Hanlon’s Razor
Hanlon’s Razor is a powerful mental model that advises us to “never attribute to malice that which is more easily explained by stupidity.” In a complex world, this principle helps us avoid paranoia and ideological thinking, encouraging us to look for options rather than assuming ill intent when bad results occur. It reminds us that mistakes, ignorance, and laziness are often more likely explanations than deliberate wrongdoing.
The Essence of Unintentionality
The razor suggests that when something negative happens, the simplest explanation is usually an absence of intent. Road rage is a perfect example: assuming another driver cuts you off maliciously implies complex, risky planning on their part. The simpler, more likely explanation is that they simply didn’t see you or made a mistake.
The “Linda problem” by Daniel Kahneman and Amos Tversky demonstrates why we need Hanlon’s Razor. Our minds are deeply affected by vivid, available evidence, leading us to over-conclude and violate simple logic. The experiment showed that people were more likely to believe Linda was a “bank teller and active in the feminist movement” than simply a “bank teller,” even though the latter is statistically more probable. This “Fallacy of Conjunction” illustrates our tendency to package unrelated factors if they fit an existing belief, leading us to assume intentionality when an issue seems “wrong,” even if it’s unintentional.
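The rule Linda-problem respondents violate is the conjunction rule: the probability of two events both occurring can never exceed the probability of either one alone. A sketch with invented numbers, not data from Kahneman and Tversky's experiment:

```python
# Conjunction rule: P(A and B) <= P(A). The probabilities below are
# invented for illustration only.
p_teller = 0.05                 # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.80  # assumed P(feminist | bank teller)

p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller       # the conjunction is never the more likely event
print(f"P(teller) = {p_teller:.2f}, P(teller and feminist) = {p_both:.2f}")
```

However plausible the added detail makes the story feel, it can only shrink the probability.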
Assuming malice, rather than stupidity or error, fosters paranoia and places oneself at the center of everyone else’s world. This is a self-centered approach that hinders effective problem-solving. For every act of malice, there is likely far more ignorance, stupidity, and laziness.
Historical Applications and Impact
- The Fall of the Roman Empire: In 408 AD, Western Roman Emperor Honorius assumed malicious intentions on the part of his best general, Stilicho, leading to Stilicho’s execution. Stilicho’s actions, such as advising against fighting Alaric (leader of the Visigoths), were misinterpreted as a bid for power. Without Stilicho’s military prowess and influence, the Empire suffered a military disaster, leading to the sacking of Rome two years later and contributing to the Empire’s collapse. Honorius’s failure to apply Hanlon’s Razor had catastrophic consequences.
- The Man Who Saved the World (Vasili Arkhipov): On October 27, 1962, during the Cuban Missile Crisis, Soviet submarine officer Vasili Arkhipov single-handedly prevented nuclear war by remaining calm and refusing to assume malice. When American destroyers dropped signaling depth charges intended as warning shots to force Soviet subs to surface, the captain of Arkhipov’s nuclear-armed sub believed war had broken out and wanted to launch a nuclear torpedo. However, Arkhipov, one of three officers required to authorize the launch, insisted on surfacing to contact Moscow, assuming the explosions were a mistake rather than an act of war. His adherence to Hanlon’s Razor saved billions of lives.
Hanlon’s Razor empowers us by offering more realistic and effective options for remedying bad situations. When we assume malice, we become defensive, narrowing our vision to dealing with the perceived threat. By recognizing that mistakes, laziness, or bad incentives are more common causes, we can approach problems with a more open mind, identifying opportunities for resolution.
The Devil Fallacy and Conclusion
Robert Heinlein’s “Devil Fallacy” describes the tendency to attribute negative conditions to “villainy” when they “simply result from stupidity.” Hanlon’s Razor is a crucial tool for overcoming this fallacy. While it’s important not to overthink the model, it encourages us to prioritize the simplest explanation, particularly those requiring the least energy (ignorance, laziness), over active malice. Ultimately, Hanlon’s Razor reminds us that people are human and make mistakes. Recognizing this truth makes our lives easier, better, and more effective.
Key Takeaways
“The Great Mental Models: General Thinking Concepts” fundamentally reshapes how you approach problems and decisions by providing a robust framework for understanding the world. By embracing these core mental models, you gain the ability to strip away superficial complexity, uncover hidden realities, and anticipate consequences, leading to more effective and fulfilling outcomes in all areas of your life.
The core lessons readers should remember are:
- Your mental models define your reality: The quality of your thinking directly depends on the diverse and accurate mental models you possess.
- The map is not the territory: Always remember that your models and abstractions are simplifications, not reality itself. Constantly update them based on real-world feedback.
- Know your circle of competence: Understand what you genuinely know and, more importantly, what you don’t. Operate within your strengths and know when to seek expert help.
- Reason from first principles: Break down complex problems to their fundamental truths, challenging assumptions to unleash creativity and innovation.
- Employ thought experiments: Use mental simulations to explore impossible scenarios, re-imagine history, and intuit non-intuitive outcomes, thereby understanding cause and effect more deeply.
- Think second-order: Go beyond immediate consequences; consider the effects of the effects to avoid unintended negative outcomes and prioritize long-term interests.
- Embrace probabilistic thinking: Estimate the likelihood of outcomes using Bayesian updating, understand fat-tailed distributions, and be aware of your own biases in probability estimates.
- Practice inversion: Instead of just striving for success, identify and avoid what leads to failure. Sometimes, eliminating obstacles is more powerful than direct pursuit of a goal.
- Apply Occam’s Razor: Prefer simpler explanations with fewer assumptions. This saves time and energy and is often more likely to be correct, but don’t force artificial simplicity on genuinely complex issues.
- Leverage Hanlon’s Razor: Attribute bad outcomes to stupidity or error before malice. This reduces paranoia, fosters a more realistic outlook, and opens pathways for effective solutions.
Next actions you should take immediately:
- Start a “mental models journal”: Begin recording situations where you observe these models at work in the world, or where you could have applied them to make a better decision. Note your hypotheses, actions, and results.
- Actively seek external feedback: Identify trusted individuals who can offer honest perspectives on your decision-making and areas for improvement.
- Invert a current problem: Take a challenge you’re facing and consider how you might cause the worst possible outcome. Then, identify behaviors or conditions to avoid.
Reflection prompts:
- How have you, in the past, mistaken a “map” for the “territory,” and what were the consequences?
- In which areas of your life is your “circle of competence” strong, and where are you truly a “stranger”? How will you adjust your approach in those “stranger” areas?
- What is one complex problem you’re currently facing? Can you break it down to its “first principles” to reveal a simpler solution or a new approach?