
The Design of Everyday Things: Complete Summary of Don Norman’s Human-Centered Design for Usable Products
Introduction: What This Book Is About
Don Norman’s “The Design of Everyday Things” (DOET) is a seminal work that fundamentally reshapes how we perceive and interact with the world around us. Originally published as “The Psychology of Everyday Things” (POET), this revised and expanded edition continues to illuminate the principles of good design and expose the frustrations of poor design. Norman, a cognitive scientist and design expert, argues that the problems we encounter with doors, light switches, and complex electronic devices are not due to human incompetence but to flawed design that misunderstands human psychology.
This book serves as a foundational guide for anyone interested in why some products are a joy to use while others are a constant source of irritation. It introduces core concepts like affordances, signifiers, mappings, feedback, and conceptual models, demonstrating how their proper application can transform frustrating experiences into seamless, intuitive interactions. Norman advocates for a human-centered design (HCD) approach, emphasizing that design should prioritize human needs, capabilities, and behaviors above all else.
Readers will learn to become astute observers of design, recognizing both the invisible brilliance of good design and the glaring inadequacies of bad design. The book provides the tools not only to identify design failures but also to understand their root causes and propose effective solutions. Whether you are a professional designer, engineer, business leader, or simply an everyday consumer, this summary covers the book's key insights, empowering you to demand and create better, more enjoyable products that genuinely enhance daily life.
Chapter 1: The Psychopathology of Everyday Things
This chapter delves into the pervasive issue of poorly designed everyday objects, highlighting how they lead to confusion, frustration, and even danger. Norman introduces core principles for understanding and improving the design of interactions between people and technology.
The Pervasiveness of Bad Design
Don Norman begins by confessing his personal struggles with seemingly simple objects like doors, which he famously calls “Norman doors” due to their confusing operation. He points out that if a device as basic as a door requires signs to indicate whether to push, pull, or slide, it represents a fundamental failure of design. Norman’s personal experience of being trapped in a post office doorway due to invisible hinges on swinging glass doors illustrates how aesthetically pleasing designs can completely fail in usability when discoverability is compromised.
Understanding Good Design: Discoverability and Understanding
Good design is characterized by discoverability and understanding. Discoverability means that users can figure out what actions are possible and where and how to perform them. Understanding means knowing what the product is for and how its controls and settings work. When these elements are absent, users often resort to memorizing a few fixed settings for complex devices, or blame themselves for their inability to use simple ones. The example of a fancy Italian washer-dryer with too many confusing controls highlights how complexity without clarity leads to frustration and underutilization of features.
The Complexity of Modern Devices and the Role of Design
All artificial things are designed, from furniture layouts to intricate electronic devices. Norman emphasizes that the field of design is vast, encompassing industrial design, interaction design, and experience design. Industrial design focuses on form, material, and optimizing function and appearance for user and manufacturer benefit. Interaction design is about how people interact with technology, enhancing understanding of what can be done and what is happening. Experience design emphasizes the overall quality and enjoyment of the total user experience. Good design, in any of these areas, creates pleasurable and usable products, while bad design leads to frustration and forces users to adapt to the machine rather than the other way around.
Blaming the Machine, Not the User
Norman strongly argues that blame for difficulties should fall on the machines and their design, not the operators. Engineers, often the primary designers, tend to think logically and assume users will also behave logically or read instructions. However, humans are complex, imaginative, and prone to error. The Three Mile Island nuclear power plant accident, initially attributed to “human error,” was later found to be primarily a design fault of the control room. Norman stresses that machines should be designed on the assumption that humans will make errors, making it the machine’s duty to understand people, not the other way around.
Introducing Human-Centered Design (HCD)
The solution to pervasive design problems is human-centered design (HCD). This approach prioritizes human needs, capabilities, and behavior in the design process. Good design requires understanding both psychology and technology, and it relies heavily on good communication from machine to person. Designers should focus on scenarios where things go wrong, as well as when they go right, because well-designed error handling can transform a negative experience into a positive one, fostering a sense of control and satisfaction. HCD is a philosophy and set of procedures that integrate deeply with industrial, interaction, and experience design.
Fundamental Principles of Interaction: Affordances
Great designers create pleasurable experiences, which are crucial for how users remember their interactions. Norman introduces five fundamental psychological concepts for discoverability and understanding: affordances, signifiers, constraints, mappings, and feedback, with conceptual models providing true understanding. Affordances refer to the relationship between a physical object’s properties and an agent’s capabilities, determining how an object could possibly be used. A chair affords sitting, for example, but whether it also affords lifting depends on the strength of the person. Norman credits J. J. Gibson for the concept, emphasizing that affordances exist whether perceived or not, but visible affordances are crucial for strong clues to operation.
Fundamental Principles of Interaction: Signifiers
While affordances define possible actions, signifiers communicate where an action should take place. Norman clarifies the distinction, noting that “affordance” was often misused by designers to mean “signifier.” Signifiers can be deliberate, like a “PUSH” sign on a door, or unintentional, like a visible trail indicating a path. They are perceivable indicators that communicate appropriate behavior. Good design requires signifiers to effectively communicate purpose, structure, and operation. Problem doors often lack clear signifiers, forcing trial-and-error. Norman gives the example of a hotel sink stopper that had no visible signifier, requiring the user to push down on it, which was counterintuitive. Signifiers are more critical for designers than affordances because they directly address how to use the design.
Fundamental Principles of Interaction: Mapping
Mapping is the relationship between controls and their effects. Natural mapping uses spatial analogies, like turning a steering wheel clockwise to turn a car right, leading to immediate understanding. This principle is vital in the design and layout of controls and displays. Good mapping relies on understandable conceptual models of how controls affect a system. Norman highlights an excellent example: automobile seat adjustment controls shaped like the seat itself, where moving a part of the control directly corresponds to moving the analogous part of the seat. Grouping related controls and placing them close to the controlled items are also principles derived from Gestalt psychology for effective mapping.
Fundamental Principles of Interaction: Feedback
Feedback communicates the results of an action, providing crucial information about whether a system is working on a request. Norman illustrates the lack of feedback with people repeatedly pushing elevator buttons or pedestrian crossing buttons. Feedback must be immediate, as even a tenth-of-a-second delay can be disconcerting. It also needs to be informative, avoiding generic beeps or flashes that convey little useful data. Too much feedback can be annoying or dangerous, leading users to ignore or disable warnings. Poor feedback design often results from cost-saving measures, using simple lights or sounds for multiple types of information. Effective feedback is planned, unobtrusive for unimportant information, and attention-grabbing for critical signals.
Fundamental Principles of Interaction: Conceptual Models
A conceptual model is a simplified explanation of how something works, valuable for understanding, predicting behavior, and troubleshooting. It doesn’t need to be complete or perfectly accurate, just useful. Computer file, folder, and icon displays, for instance, create effective conceptual models for users, even if the underlying reality is more complex. Mental models are the conceptual models users form in their minds. Norman’s personal struggle with his old refrigerator’s temperature controls exemplifies a false conceptual model caused by misleading labels. The two controls (freezer and refrigerator) suggested independent compartments, when in reality there was only one cooling unit and one control adjusted the thermostat while the other apportioned cold air, leading to frustration and inability to set temperatures correctly.
The System Image
The system image comprises all the information available to users about a product: its physical appearance, past experiences with similar items, sales literature, advertisements, websites, and manuals. It is the primary means by which designers communicate their conceptual model to users. If the system image is incoherent or inappropriate, users cannot easily understand or use the device, as seen with Norman’s refrigerator. A good conceptual model, communicated through a clear system image, is crucial for understandable and enjoyable products, guiding users even when things go wrong.
The Paradox of Technology
Technology offers the potential for ease and enjoyment but often introduces added complexity and frustration. The evolution of the wristwatch illustrates this: from a simple time-telling device with an intuitive stem control to modern digital watches packed with numerous functions (stopwatch, alarm, multiple time zones, etc.) controlled by multiple, context-sensitive buttons. While technology enables more features, it also makes devices harder to learn and use, creating the paradox of technology. The proliferation of “smart screens” that merge phones, watches, and computers further complicates interaction, posing a challenge for designers to create intuitive controls without physical buttons.
The Design Challenge
Good design is inherently difficult, requiring the cooperative efforts of multiple disciplines, including designers, engineers, manufacturers, marketers, and service personnel. Each discipline has different goals and priorities (e.g., usability, aesthetics, cost, reliability, market appeal), which often conflict. The challenge is to satisfy all these requirements while maintaining focus on the user. Norman emphasizes that successful products must not only be usable but also marketable and profitable. The ultimate goal is to produce great products that customers love, which demands a holistic approach to design that considers the entire product ecosystem and aligns the needs of all stakeholders.
Chapter 2: The Psychology of Everyday Actions
This chapter explores the fundamental psychological processes behind human actions, detailing the “gulfs” users face and introducing the “seven stages of action” to guide effective design. It also integrates the role of emotion and different levels of cognitive processing.
How People Do Things: The Gulfs of Execution and Evaluation
When people use a device, they encounter two critical barriers: the Gulf of Execution and the Gulf of Evaluation. The Gulf of Execution refers to the challenge of figuring out how to operate a device and what actions are possible. It’s the gap between the user’s intention and the actions they can perform. The Gulf of Evaluation is the challenge of determining what happened after an action and whether the results match their expectations. It’s the gap between the device’s state and the user’s interpretation. Norman’s landlady struggling with a stuck filing cabinet drawer exemplifies these gulfs: she had a clear goal but didn’t know how to execute the action when it failed, and then couldn’t interpret why it didn’t open. Designers must help users bridge both gulfs through effective use of signifiers, constraints, mappings, conceptual models, and feedback.
The Seven Stages of Action
Human action can be broken down into seven stages, forming a cycle that helps understand how people interact with devices:
- Forming the Goal: The user decides what they want to achieve (e.g., “get more light”).
- Forming the Intention: The user decides on a specific course of action (e.g., “turn on the nearby lamp”).
- Specifying the Action Sequence: The user determines the sequence of physical movements needed (e.g., “reach for the switch, flick it up”).
- Executing the Action: The user performs the physical actions.
- Perceiving the State of the World: The user observes what happened (e.g., “the light got brighter”).
- Interpreting the Perception: The user makes sense of the observation (e.g., “the lamp turned on”).
- Comparing the Outcome with the Goal: The user evaluates if the action achieved the desired goal (e.g., “yes, there’s enough light to read”).
This cycle is often subconscious for skilled behaviors (like driving), becoming conscious only when difficulties arise or in novel situations. The cycle can be goal-driven (starting from forming a goal) or event-driven/data-driven (starting from perceiving a change in the world). Norman also introduces opportunistic actions, where behavior takes advantage of circumstances rather than extensive planning.
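As a sketch, the seven-stage cycle can be written down as a small data structure. The stage labels and the helper function below are illustrative assumptions, not from the book; they simply make the goal-driven versus event-driven entry points concrete:

```python
from enum import Enum

class Stage(Enum):
    """Norman's seven stages of action (illustrative labels)."""
    GOAL = "form the goal"
    INTENTION = "form the intention"
    SPECIFY = "specify the action sequence"
    EXECUTE = "execute the action"
    PERCEIVE = "perceive the state of the world"
    INTERPRET = "interpret the perception"
    COMPARE = "compare the outcome with the goal"

# Enum iteration preserves definition order, so this is the full cycle:
# the first four stages bridge the Gulf of Execution, the last three
# the Gulf of Evaluation.
CYCLE = list(Stage)

def walk_cycle(start: Stage) -> list:
    """One full pass through the cycle beginning at `start`.

    Goal-driven behavior starts at GOAL; event-driven (data-driven)
    behavior starts at PERCEIVE, reacting to a change in the world.
    """
    i = CYCLE.index(start)
    return CYCLE[i:] + CYCLE[:i]
```

For instance, `walk_cycle(Stage.PERCEIVE)` models an event-driven pass: the user first notices something, interprets it, and only then forms a goal in response.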
Human Thought: Mostly Subconscious
Most human behavior results from subconscious processes, operating without conscious awareness. We often perform complex actions, like wiggling a specific finger or answering factual questions, without understanding the intricate neural commands involved. While conscious attention is crucial for initial learning, continued practice leads to “overlearning,” where skills become effortless and automatic. This subconscious efficiency allows us to multitask (e.g., walk and talk). However, it also means we often don’t know why we do what we do, and our conscious mind may construct post-hoc justifications for subconscious decisions. This hidden nature of thought means that many common beliefs about human behavior are often incorrect.
Human Cognition and Emotion
Norman asserts that cognition and emotion are tightly intertwined; they cannot be separated. Cognition helps us make sense of the world, while emotion assigns value and determines whether a situation is good or bad, safe or threatening. He proposes a three-level model of processing:
- Visceral Level: The most basic and subconscious level, responsible for immediate, automatic responses (e.g., fear, attraction, repulsion). It’s driven by our basic biology and dictates initial aesthetic judgments (e.g., a pleasant sound or a jarring scratch). Designers leverage this level through aesthetics and sensory appeal.
- Behavioral Level: The home of learned skills, triggered by matching situations to patterns. Actions at this level are largely subconscious but are associated with expectations. Positive outcomes lead to positive affective responses (satisfaction, relief), while negative outcomes lead to frustration or anger. This level is crucial for feedback and the feeling of control.
- Reflective Level: The conscious level where deep understanding, reasoning, and decision-making occur. It’s often “looking back” over events, assigning causality, and making future predictions, leading to higher-level emotions like guilt, pride, blame, or praise. Reflective memories influence our long-term judgments and recommendations of products, sometimes overriding immediate visceral or behavioral experiences.
The Seven Stages of Action and the Three Levels of Processing
The seven stages of action align with the three levels of processing:
- Visceral responses are active at the perceive and perform stages, influencing immediate sensing and motor actions.
- Behavioral processing is central to the specify, perform, perceive, and interpret stages, involving learned skills, expectations, and feedback interpretation.
- Reflective processing is involved in forming goals, intentions, and plans, and the final comparison of outcomes with goals, leading to conscious evaluation and emotional attribution.
This interplay means different emotions arise at different stages; for example, hope and joy at the behavioral level (driven by expectations), and satisfaction or pride at the reflective level (from achieving goals). The “flow” state, described by Mihaly Csikszentmihalyi, is a powerful behavioral emotion where challenge slightly exceeds skill, leading to immersive engagement.
People as Storytellers
Humans are inherently storytellers, constantly seeking causes and explanations for events, even forming them from fragmentary or erroneous evidence. These conceptual models (folk theories) are crucial for understanding experiences, predicting outcomes, and handling unexpected events. Norman’s thermostat example, where users hold a false “valve” theory (believing a higher setting delivers heat or cold faster), illustrates how inadequate information leads to erroneous conceptual models and inappropriate actions, wasting energy and causing frustration. This highlights the importance of providing accurate and interpretable conceptual models in design to prevent misattributions and ensure correct usage.
Blaming the Wrong Things
People tend to attribute causes based on their own conceptual models, often blaming themselves or others incorrectly. When a device fails, users often think, “I’m being stupid,” especially if the task seems simple. This self-blame creates a “conspiracy of silence” where widespread design flaws go unreported. Norman contrasts this with the typical attribution error: we blame our own misfortunes on the environment but others’ misfortunes on their personalities. He recounts an anecdote of office workers blaming themselves for a computer error that was actually a design flaw, preventing the problem from being fixed.
Falsely Blaming Yourself and Learned Helplessness
The tendency to falsely blame oneself for difficulties with everyday objects can lead to learned helplessness, where repeated failure causes individuals to believe a task is impossible for them, leading to cessation of effort and even depression. Norman suggests that technology and even mathematics education can induce “taught helplessness” by not providing adequate support or understanding. He advocates for positive psychology, viewing “failures” as “learning experiences”—a core principle for scientists and design firms like IDEO (“Fail often, fail fast”). Designers should never blame users, but rather see user difficulties as signifiers for improvement, providing help and guidance instead of error messages, and making actions easily reversible.
The Seven Stages of Action: Seven Fundamental Design Principles
The seven stages of action provide a basic checklist for design. Each stage requires specific design strategies to bridge the Gulfs of Execution and Evaluation:
- Discoverability: It must be possible to determine what actions are possible and the device’s current state.
- Feedback: Provide full and continuous information about action results and system state, making new states easily determinable.
- Conceptual Model: The design should project all necessary information to create a good conceptual model, fostering understanding and control.
- Affordances: Proper affordances must exist to make desired actions physically possible.
- Signifiers: Effective use of signifiers ensures discoverability and clear communication of feedback.
- Mappings: Relationships between controls and actions should follow good mapping principles (spatial layout, temporal contiguity).
- Constraints: Physical, logical, semantic, and cultural constraints guide actions and ease interpretation.
These principles combine feedforward (information for execution) and feedback (information about results) to ensure the system is intelligible and matches human needs. Norman encourages readers to analyze design failures by identifying which stage of action is deficient and which design principles are violated, and then to think about how to improve them.
Chapter 3: Knowledge in the Head and in the World
This chapter explores how people leverage both internal knowledge (in their heads) and external information (in the world) to perform tasks, even with imprecise understanding. It also delves into memory systems and the cultural aspects of natural mappings.
Precise Behavior from Imprecise Knowledge
Norman highlights that precise behavior often emerges from imprecise knowledge because knowledge is a combination of what’s in our heads and what’s available in the world. People can use currency accurately despite not remembering its exact appearance because they only need enough information to distinguish between denominations. Four factors make this possible:
- Knowledge is both in the head and in the world: Much needed information is external, reducing the need for internal memorization.
- Great precision is not required: Only sufficient knowledge to distinguish appropriate choices is necessary.
- Natural constraints exist in the world: Physical properties limit possible actions, guiding behavior without explicit knowledge.
- Knowledge of cultural constraints and conventions exists in the head: Learned restrictions narrow down likely actions.
This distribution of knowledge allows people to function effectively with minimal internal learning, even in novel or confusing situations.
Knowledge Is in the World: Declarative vs. Procedural Knowledge
Knowledge in the world refers to information readily available in the environment, such as signifiers, physical constraints, and natural mappings. This external knowledge significantly reduces the burden on human memory. For instance, keyboard labels allow non-typists to “hunt and peck” without memorizing key locations, though full internal knowledge (from practice) is faster.
People use two types of knowledge:
- Declarative knowledge (“knowledge of”): Facts and rules (e.g., “Stop at red traffic lights”). This is easy to write down and teach.
- Procedural knowledge (“knowledge how”): Skills and procedures (e.g., riding a bicycle, playing an instrument). This is largely subconscious, difficult to articulate, and best learned through practice.
Designers can aid users by embedding knowledge directly into the device or environment, making tasks easier and reducing the need for extensive memorization, as exemplified by organized workspaces and external notes.
When Precision Is Unexpectedly Required
Problems arise when the environment changes and previously sufficient imprecise knowledge becomes inadequate. Norman cites the examples of the Susan B. Anthony dollar coin in the US, the British one-pound coin, and the French ten-franc coin, all of which caused confusion because their size and weight were too similar to existing coins of different values. Users had developed partial descriptions in memory, sufficient for the old currency, but not precise enough to distinguish the new ones. This highlights that discrimination relies on distinguishing features, and if those features change or new items are introduced that violate established discrimination rules, confusion ensues. The contrast with US paper money (all the same size, so users rely on numbers/images) further illustrates how learned discrimination strategies can cause issues when applied to different contexts (e.g., Europeans confusing US bills due to lack of color/size cues).
Constraints Simplify Memory
Constraints act as powerful tools for design by limiting the set of possible actions, thereby simplifying memory requirements. The rigorous constraints of poetry (rhyme, rhythm, meter) allow bards to “re-create” epic poems thousands of lines long, seemingly “word for word,” but actually through a flexible formula. Similarly, assembling a mechanical device with ten parts seems daunting (10! = 3,628,800 possible orderings), but physical constraints (parts fit only in certain places), cultural constraints (screws tighten clockwise), and logical constraints (washers go before nuts) dramatically reduce the possibilities, making reassembly manageable. Constraints reduce the amount of information that must be learned or remembered, guiding behavior.
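The combinatorics behind the assembly example are easy to verify. The second half of the sketch uses a hypothetical constraint (four parts pinned by physical fit), chosen purely for illustration, not Norman's own breakdown:

```python
from math import factorial

# Ten parts with no constraints at all: any assembly order is possible.
unconstrained = factorial(10)
print(unconstrained)  # 3628800 candidate sequences

# Constraints prune the space. Suppose, as an illustration, that
# physical fit alone pins four parts to unique steps in the sequence;
# only the remaining six parts can still be ordered freely.
physically_constrained = factorial(6)
print(physically_constrained)  # 720 sequences remain
```

Each additional cultural or logical constraint shrinks the space further, which is why reassembly feels tractable despite the millions of theoretical orderings.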
Memory Is Knowledge in the Head
Knowledge in the head refers to internal memory, which can be vulnerable, as demonstrated by the “Ali Baba and the Forty Thieves” story, where Kasim forgot the magic phrase “Open Simsim!” under pressure. This illustrates the difficulty of remembering arbitrary things when there’s no inherent meaning or structure. Norman critiques modern security systems that require numerous complex passwords, forcing users to resort to insecure practices like writing them down. He argues that current security requirements often fail to consider human cognitive abilities, making systems less secure in practice. The example of Google employees using a brick to prop open a secure door highlights how overly rigid security measures can be counterproductive, leading to unintended vulnerabilities.
The Structure of Memory: Short-Term (Working) Memory
Psychologists distinguish between short-term memory (STM), or working memory, and long-term memory (LTM). STM retains immediate experiences and currently active information, but its capacity is severely limited (around 3-5 items for practical purposes). STM is fragile; distractions cause information to disappear. This has critical design implications: systems should avoid presenting critical information that quickly disappears or requires users to remember multiple steps (e.g., error messages that vanish). Norman gives an example of nurses writing critical patient information on their hands because electronic medical record systems log them out too quickly, negating the benefits of digital records. Designing for multiple sensory modalities (e.g., auditory warnings in cars for visual tasks) can mitigate STM interference.
The Structure of Memory: Long-Term Memory
Long-term memory (LTM) is memory for the past, with a seemingly unlimited capacity, but information takes time and effort to get in and out. LTM is not an exact recording but a reconstruction that can be biased and distorted, making eyewitness testimony unreliable. Retrieval depends heavily on how information was initially interpreted. Norman distinguishes between:
- Memory for arbitrary things: Difficult to learn (rote learning) and provides no help when problems arise. Examples include alphabet order and random passwords. Adding artificial structure (mnemonics) can aid memorization.
- Memory for meaningful things: Easier to learn because they connect to existing knowledge and sensible structures. Conceptual models are key to making things meaningful. Professor Sayeki’s reinterpretation of his motorcycle’s turn signal (mapping switch movement to handlebar movement rather than turn direction) illustrates how inventing a meaningful relationship transforms arbitrary tasks into natural ones.
Approximate Models: Memory in the Real World
For practical purposes, approximate models are often “good enough,” even if not scientifically accurate. Norman provides several examples:
- Temperature conversion: An approximation like °C = (°F–30) / 2 is sufficient for everyday use, despite not being exact.
- Short-term memory: Thinking of STM as having “five memory slots” that new items knock out is a useful design approximation, even if not a precise scientific model.
- Motorcycle steering: Professor Sayeki’s conceptual model for turn signals worked for him, even though the actual mechanics of countersteering (turning handlebars right to go left) are counterintuitive. The model is useful because it leads to correct behavior in the desired situation.
- “Good enough” arithmetic: Most people estimate complex arithmetic rather than performing exact mental calculations, using calculators for precision when needed.
These examples demonstrate that science seeks truth, but practice thrives on useful approximations that minimize mental effort and yield sufficiently accurate results.
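The temperature rule of thumb above is easy to check against the exact conversion. A minimal sketch (the function names are mine):

```python
def exact_c(f: float) -> float:
    """Exact Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

def approx_c(f: float) -> float:
    """Norman's 'good enough' rule: subtract 30, then halve."""
    return (f - 30) / 2

# Across typical weather temperatures the approximation stays within
# about 2.5 degrees Celsius of the exact value: the error works out
# to (f - 50) / 18.
for f in (32, 55, 70, 90):
    print(f, round(exact_c(f), 1), approx_c(f))
```

At 70 °F, for example, the exact value is about 21.1 °C and the approximation gives 20 °C: close enough to decide what to wear, with far less mental arithmetic.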
Knowledge in the Head: How Pilots Remember Air-Traffic Control Instructions
Pilots must remember complex, rapidly delivered air-traffic control instructions. They do so by combining knowledge in the head with knowledge in the world:
- Writing down critical information: Pilots immediately jot down key numbers or instructions.
- Entering information into equipment: They input frequencies or codes into their systems as they hear them.
- Recognizing meaningful phrases: Instructions are often familiar patterns or numbers, reducing the cognitive load.
The design implication is clear: make it easy to externalize new knowledge into relevant equipment as it’s received, minimizing memory errors. The evolution to digital transmission of air-traffic control instructions, allowing information to remain on screen, further supports this.
Knowledge in the Head: Reminding (Prospective Memory)
Prospective memory is the ability to remember to perform an action in the future. Reminders consist of two parts: the signal (something is to be remembered) and the message (what to remember). Many popular reminder methods provide only one: tying a string around a finger provides a signal but no message; a note provides a message but no signal to look at it. The ideal reminder has both components, and appears at the correct time and place. Transferring the burden of memory to the world, such as placing a book in front of the door to remember to take it, is an effective strategy. The proliferation of digital and paper reminder tools highlights the universal need for assistance with prospective memory.
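The two-part structure of a reminder can be captured in a few lines. This is an illustrative sketch; the class and field names are assumptions, not anything from the book:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Reminder:
    """A complete reminder needs both halves Norman identifies."""
    message: str          # the MESSAGE: what to remember ("take the book")
    signal_at: datetime   # the SIGNAL: when it should fire
    place: str = ""       # optionally, where it should fire

    def fire(self, now: datetime) -> Optional[str]:
        """Deliver the message only once the signal time arrives."""
        return self.message if now >= self.signal_at else None
```

A string around the finger is a `Reminder` with a signal but an empty message; a note left in a drawer is a message with no signal. Digital reminder apps succeed precisely because they supply both, at the right time.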
The Tradeoff Between Knowledge in the World and in the Head
There’s a fundamental tradeoff between knowledge in the world and knowledge in the head.
- Knowledge in the world is readily available, self-reminding, and requires less learning. It relies on the designer’s skill to make it interpretable. However, it can be slow to access and difficult to use if cluttered, and its effectiveness depends on the physical stability of the environment.
- Knowledge in the head is efficient and fast once learned, requiring no search or interpretation. However, it requires considerable learning and is fragile (easily forgotten, especially in STM). It offers designers more freedom in appearance since nothing needs to be visible.
The best design combines both, allowing for efficient expert use (knowledge in the head) and aiding non-experts or those performing infrequent tasks (knowledge in the world). The increasing shift to digital, invisible information means a greater burden on memory if not properly supported by design.
Memory in Multiple Heads, Multiple Devices
The concept of transactive memory describes how groups of people collaboratively remember information, with each person contributing a piece of knowledge that no single individual might possess. This “multiple heads” approach enhances collective intelligence. Similarly, reliance on technology as an external memory system (e.g., smart devices for phone numbers or directions) creates a “cybermind” that augments individual cognitive abilities. While this partnership makes us “smarter” and more capable, it also creates dependence on technology. If these external aids are suddenly unavailable, individuals may feel helpless. Norman argues that the combination of humans and artifacts creates a powerful synergy, making us stronger and more adaptable in the modern world.
Natural Mapping
Natural mapping is a critical design principle where the relationship between controls and their effects is obvious and intuitive, often relying on spatial correspondence. The example of stove controls illustrates poor mapping: rectangular burners with linear controls often lead to errors because the relationship isn’t clear without labels. Norman outlines three levels of mapping effectiveness:
- Best mapping: Controls mounted directly on the item controlled.
- Second-best mapping: Controls as close as possible to the item controlled.
- Third-best mapping: Controls arranged in the same spatial configuration as the items controlled.
He advocates for stove designs where controls mimic the burner layout (e.g., in a rectangle), eliminating the need for labels and reducing errors. The automobile seat adjustment control is a prime example of excellent natural mapping, where the control’s shape directly mirrors the seat’s form.
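Norman's third-best mapping, controls arranged in the same spatial configuration as the items they control, can be sketched minimally. The grid layout and burner names below are invented for illustration; the point is that when controls and burners share one layout, position alone identifies the burner and no labels are needed.

```python
# Burners arranged in a 2x2 grid; the controls share the same grid,
# so a control's position alone says which burner it operates.
burners = {
    (0, 0): "back-left",  (0, 1): "back-right",
    (1, 0): "front-left", (1, 1): "front-right",
}

def control_to_burner(row: int, col: int) -> str:
    """With a natural (spatial) mapping, the control at grid position
    (row, col) operates the burner at the same position -- the user
    reads the mapping directly, with no labels to memorize."""
    return burners[(row, col)]

print(control_to_burner(1, 0))  # front-left
```

A linear row of four knobs, by contrast, gives 24 possible pairings with the four burners, which is exactly why such stoves need labels.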
Culture and Design: Natural Mappings Can Vary with Culture
What feels “natural” in mapping can be culture-specific. Norman describes his experience with a projector remote with “up” and “down” buttons that controlled “backward” and “forward” for him, but “forward” and “backward” for some Asian audiences. This stemmed from different conceptualizations of movement: whether the user moves through the images (top button for next) or the images move toward the user (bottom button for next).
Cultural variations also apply to:
- Time: Some cultures conceptualize time as a road the person travels (future ahead), others as time moving towards the person. The Aymara group conceives the past as “in front” (visible, remembered) and the future as “behind” (unseen, unknown), which is perfectly logical.
- Horizontal direction of time: Left-to-right writing cultures often see time flowing left to right; right-to-left writing cultures see it flowing right to left.
- Scrolling: Early computer systems used a “moving window” metaphor (move scrollbar down, text moves up). Touch screens popularized the “moving text” metaphor (finger moves up, text moves up). This shift in metaphor causes confusion when switching between systems.
Norman concludes that design must be sensitive to point of view, choice of metaphor, and culture. While consistency is generally virtuous, when new, vastly superior methods emerge, the benefits of change can outweigh the disruption, provided everyone changes together.
Chapter 4: Knowing What to Do: Constraints, Discoverability, and Feedback
This chapter delves into the practical application of constraints, discoverability, and feedback in everyday objects, showcasing how these principles guide user action even in unfamiliar situations.
Four Kinds of Constraints: Physical, Cultural, Semantic, and Logical
Constraints are powerful clues that limit the set of possible actions, helping users readily determine the proper course of action. Norman identifies four types of constraints:
- Physical Constraints: Rely on the properties of the physical world. A large peg cannot fit into a small hole. These work best when they prevent errors before they are tried, or when the desired action is salient. Norman uses the example of a cylindrical battery that can be inserted in two ways (one correct, one damaging) because it lacks sufficient physical constraints. He proposes solutions like designing batteries to fit only one way, or having contacts that make orientation irrelevant (like Microsoft’s InstaLoad, which unfortunately hasn’t been widely adopted due to legacy problems and corporate conservatism).
- Cultural Constraints: Socially learned restrictions on behavior within a culture. They dictate acceptable actions in specific situations, like how to behave in a restaurant. For the Lego motorcycle, cultural constraints determined the placement of the red brake light (rear) and blue police light (top), and originally the yellow headlight (front, although this has changed over time as yellow headlights became less common). Violating these can cause confusion or offense.
- Semantic Constraints: Rely on the meaning of the situation and world knowledge. For instance, a motorcycle rider must face forward, and a windshield must be in front to protect the face. These constraints are powerful but can also change as new technologies or creative uses emerge.
- Logical Constraints: Dictate actions based on logical relationships. If parts are left over after assembly, it’s logical that something was missed. For the Lego motorcycle, once all other pieces were placed, the blue light’s position was logically constrained to the only remaining spot. Natural mappings often work by providing logical constraints (e.g., left switch for left light).
The Problem With Doors
Doors are a common example where affordances, signifiers, and constraints often fail. Norman reiterates the “Norman doors” problem, where a simple device becomes confusing due to a lack of clear clues for operation. To open a door, users need to know where to act (signifiers like plates or handles) and how to act (affordances and constraints like pushing, pulling, or sliding). When a door requires a “PUSH” or “PULL” sign, it signifies bad design. The panic bar on fire doors is an excellent example of good design, providing a clear signifier and physical constraint for pushing. Conversely, inside car door handles often remain difficult to find and operate. Cabinet doors with hidden push-to-open latches are also frustrating, as the design prioritizes aesthetics over usability, forcing users to pry them open counterintuitively.
The Problem With Switches
Similar to doors, light switches often suffer from poor design, especially in complex environments like auditoriums or offices with banks of identical switches. The fundamental difficulties are determining what device they control and the mapping problem (which switch controls which light). Norman cites the example of a small airplane with identical-looking flap and landing gear switches, leading to frequent and costly errors by pilots. The solution for large rooms is to employ natural mapping, arranging switches in the same spatial configuration as the lights they control (e.g., a two-dimensional layout on a floor plan, as Norman did in his own home). He criticizes the lack of standardized components for such solutions and anticipates a future with wireless, reconfigurable controls and gesture-based interactions, though this might introduce new usability challenges by removing physical affordances.
Activity-Centered Controls
Beyond spatial mapping, activity-centered controls can be more appropriate, especially in complex environments like auditoriums. Instead of grouping controls by device (device-centered), they are grouped by the activity they support (e.g., “video,” “computer,” “full lights,” “lecture”). A well-designed activity-centered system anticipates all needs for a particular activity, such as dimming lights and controlling sound for a video presentation. However, these can fail when exceptional or unanticipated cases arise, such as a lecturer needing to briefly raise lights during a “video” activity, which might inadvertently turn off the projector. The challenge is to make these controls flexible enough for manual overrides without canceling the current activity.
Constraints That Force the Desired Behavior: Forcing Functions
Forcing functions are extreme physical constraints that prevent actions from happening unless certain conditions are met, thereby guaranteeing desired behavior. Starting a car with a key is a forcing function: the car won’t start without the key. In safety engineering, forcing functions include:
- Interlocks: Force operations to occur in proper sequence (e.g., microwave ovens that cut power when the door opens, or car transmissions that require the brake pedal to be depressed to shift out of Park).
- Lock-ins: Keep an operation active, preventing premature stopping (e.g., computer prompts asking to save work before exiting). Norman admits to using this lock-in as an efficient shortcut: he exits a program without saving, knowing the prompt will ask him to do so.
- Lockouts: Prevent entry into dangerous spaces or prevent events from occurring (e.g., gates in public stairways preventing access to basements during fires, or child-resistant caps on medicine bottles).
While powerful for safety, forcing functions can be a nuisance, leading users to disable them. Clever design minimizes nuisance while retaining safety.
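The interlock described above (shifting out of Park requires the brake pedal) reduces to a small state machine. This is a hypothetical sketch, not any real vehicle's logic; the class and method names are invented. Note that the interlock does not warn about the unsafe action, it makes the action impossible.

```python
class Transmission:
    """Interlock sketch: shifting out of Park is blocked unless the
    brake pedal is depressed (hypothetical simplification)."""
    def __init__(self):
        self.gear = "PARK"
        self.brake_pressed = False

    def press_brake(self):
        self.brake_pressed = True

    def shift(self, gear: str) -> bool:
        # The forcing function: leaving PARK requires the brake.
        # The unsafe sequence simply cannot occur.
        if self.gear == "PARK" and not self.brake_pressed:
            return False  # shift lever stays locked
        self.gear = gear
        return True

t = Transmission()
print(t.shift("DRIVE"))  # False: blocked, brake not pressed
t.press_brake()
print(t.shift("DRIVE"))  # True: interlock satisfied
```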
Conventions, Constraints, and Affordances
Conventions are a special form of cultural constraint that define acceptable behavior and help interpret perceived affordances. A doorknob’s affordance is graspability, but the convention dictates it’s for opening doors. When conventions change, confusion and discomfort arise, as exemplified by destination-control elevators. Traditional elevators allow users to select floors inside the cabin; destination-control systems require users to enter their destination in the hallway, then direct them to a specific elevator. While more efficient, this violates established mental models and conventions, leading to initial user frustration and perceived “bad design” despite its objective superiority. This illustrates that new, superior systems face resistance due to the difficulty of changing ingrained conventions, as seen with the slow adoption of the metric system in some countries.
The Faucet: A Case History of Design
Even a simple water faucet can illustrate numerous design principles and failures. Faucets aim to control water temperature and flow. Design variations, from dual hot/cold controls to single-lever mixers, each introduce different mapping problems.
- Dual-control faucets: Rely on cultural conventions (left for hot, right for cold) and the screw-thread convention (clockwise to close, counter-clockwise to open). When these conventions are violated (e.g., vertical placement or mirror-image rotation), confusion and scalding/freezing result.
- Single-control faucets: While psychologically ideal (controlling temperature and flow directly), they often lack standardization in control dimensions and movement directions. Hidden controls or inconsistent mappings (e.g., is pull for more volume or hotter temperature?) make them frustrating and disliked.
Norman argues that faucet design often violates visible affordances and signifiers, discoverability, and immediacy of feedback. He advocates for standardization as a “principle of desperation” when no intuitive natural mapping is universal, emphasizing that standards should reflect psychological conceptual models, not just physical mechanics.
Using Sound as Signifiers
When visual information is insufficient or unobtainable, sound can serve as a powerful signifier. Natural sounds provide informative feedback about a device’s operation (e.g., a door latch clicking, a car muffler rattling). They convey information about material interactions (hitting, sliding, breaking) that are often missed visually. While artificial sounds (beeps, burps) can confirm button presses, they are often annoying and uninformative due to their generic nature.
The most critical example is the absence of sound, particularly in electric vehicles. These silent cars pose a danger to pedestrians (especially the blind) who rely on engine and tire noises for orientation. This has led to the development of artificial sounds for electric vehicles, posing a complex design challenge: the sounds must be alerting, provide orientation, and not be annoying. Car manufacturers want unique “branding” sounds, while safety advocates push for standardization. This situation highlights the concept of skeuomorphism (incorporating old, familiar ideas into new technologies) and the difficulties of establishing standards under competing pressures.
Chapter 5: Human Error? No, Bad Design
This chapter radically redefines “human error,” arguing that most accidents blamed on human failure are actually the result of poor system design. It details different types of errors and offers design principles to mitigate them.
Understanding Why There Is Error
Norman strongly asserts that the high percentage of accidents blamed on “human error” (75-95%) indicates a fundamental flaw in design, not human incompetence. Unlike bridge collapses or equipment malfunctions, human error is often blamed without deeper investigation. He argues for root cause analysis that goes beyond the immediate human action to understand why the error occurred, focusing on design flaws in procedures and systems. The 2010 F-22 fighter jet crash, initially blamed on pilot error, was later re-evaluated by the Inspector General, suggesting the pilot may have been unconscious due to oxygen deprivation, illustrating how quickly blame can be assigned without proper root cause analysis. Norman advocates for the “Five Whys” technique (repeatedly asking “why” to uncover deeper causes) to prevent stopping analysis prematurely. He argues that if a system allows or induces error, it is badly designed.
Deliberate Violations
Errors are unintentional, but deliberate violations occur when people knowingly deviate from procedures or regulations. This happens for various reasons:
- Rules are often designed for legal compliance rather than work requirements, making them impractical to follow.
- Routine violations become normalized when noncompliance is ignored.
- Situational violations occur under special circumstances (e.g., speeding when late).
When violations lead to success, they are often rewarded, unwittingly encouraging noncompliance. Although violations contribute to accidents, Norman differentiates them from “human error” because they are intentional deviations, often driven by organizational or societal pressures rather than cognitive slips or mistakes.
Two Types of Errors: Slips and Mistakes
Norman and James Reason developed a classification of human error into two major categories:
- Slips: Occur when the intention is correct, but the action performed is wrong (flawed execution). The desired action is not done properly.
  - Action-based slips: The wrong action is performed (e.g., pouring milk into coffee, then putting the coffee cup in the refrigerator).
  - Memory-lapse slips: The intended action is not done, or its results are not evaluated (e.g., forgetting to turn off a gas burner).
  - Slips paradoxically occur more frequently in skilled people because automated, subconscious control leads to a lack of attention.
- Mistakes: Occur when the goal or plan itself is wrong. Even if the actions are perfectly executed, they are part of the error because the plan is inappropriate.
  - Rule-based mistakes: The situation is diagnosed correctly, but the wrong rule or procedure is applied.
  - Knowledge-based mistakes: The problem is misdiagnosed due to erroneous or incomplete knowledge.
  - Memory-lapse mistakes: Forgetting occurs at the higher levels of goals, plans, or evaluation.
Errors can be understood through the seven stages of action: mistakes arise from errors in the higher levels (goal, plan, comparison), while slips occur in the lower levels (specify, perform, perceive, interpret). Memory lapses can affect any transition between stages.
The Classification of Slips
Norman details three types of action slips particularly relevant to design:
- Capture Slips: Occur when a more frequently or recently performed action sequence “captures” the desired but less familiar one. This requires identical initial steps. Example: counting “1, 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King” after playing cards. Designers should avoid procedures with identical opening steps that later diverge.
- Description-Similarity Slips: Occur when the action is performed on an item similar to the target because the description of the target is vague or ambiguous. Example: throwing a sweaty shirt into the toilet instead of the laundry basket because both are “containers.” Designers should ensure controls and displays for different purposes are significantly different and distinguishable (e.g., shape-coded controls in airplane cockpits).
- Mode-Error Slips: Occur when a device has different states (modes) where the same controls have different meanings. Users mistakenly believe the system is in one mode when it is in another. Example: setting an alarm clock’s time when intending to set the alarm. A fatal Airbus accident involved pilots misinterpreting the flight control system’s mode, leading to a dangerous descent rate. Designers should avoid modes if possible, or make them highly visible and distinct if unavoidable, and always account for interruptions that might cause mode confusion.
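The alarm-clock example above can be sketched as code. Everything here is hypothetical (a toy `AlarmClock`, not any real device): the same `plus` button means different things in different modes, which is the root of mode-error slips, and always displaying the current mode is the mitigation Norman recommends when modes cannot be eliminated.

```python
class AlarmClock:
    """Mode-error sketch: one '+' button sets either the clock or the
    alarm depending on mode. Keeping the mode visible in every
    display is the mitigation when modes are unavoidable."""
    def __init__(self):
        self.mode = "CLOCK"   # current mode, always shown to the user
        self.clock_hour = 12
        self.alarm_hour = 6

    def set_mode(self, mode: str):
        assert mode in ("CLOCK", "ALARM")
        self.mode = mode

    def plus(self):
        # Identical control, different meaning per mode -- if the mode
        # is invisible, the user sets the wrong thing.
        if self.mode == "CLOCK":
            self.clock_hour = (self.clock_hour % 12) + 1
        else:
            self.alarm_hour = (self.alarm_hour % 12) + 1

    def display(self) -> str:
        # The mode indicator makes the device's state discoverable.
        return f"[{self.mode}] clock={self.clock_hour} alarm={self.alarm_hour}"
```

An interruption between `set_mode` and `plus` is exactly the scenario Norman warns about: the user returns believing the device is in the other mode.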
The Classification of Mistakes
Mistakes result from poor decisions, misclassifications, or incomplete consideration of factors, often due to reliance on biased or reconstructed memories.
- Rule-Based Mistakes: Occur when a wrong rule is selected or applied to a situation. This can happen if the situation is misdiagnosed, the rule itself is faulty, or the outcome is incorrectly evaluated. Example: turning a thermostat to maximum to heat a room faster (based on a false “valve” conceptual model). These are difficult to detect because the chosen action seems logical given the initial (but erroneous) classification. Norman cites the Kiss nightclub fire as an example of a rule-based mistake where guards, following a rule to prevent people from leaving without paying, inadvertently blocked exits during a fire.
- Knowledge-Based Mistakes: Occur in novel situations where no existing skills or rules apply, requiring conscious reasoning and problem-solving. These mistakes happen when the problem is misdiagnosed due to erroneous or incomplete knowledge, leading to solving the wrong problem. Solutions require good conceptual models and collaborative problem-solving.
- Memory-Lapse Mistakes: Happen when forgetting affects goals, plans, or evaluation of the current state, leading to an inappropriate plan or decision. The absence of something that should have been done is often hard to detect.
Detecting mistakes is difficult because chosen actions are consistent with the (wrong) goal, and hindsight bias makes errors seem obvious only after the fact. Norman gives the example of his family misinterpreting highway signs leading them to Las Vegas instead of Mammoth Lakes, because the signs were dismissed as “easily explained.”
Social and Institutional Pressures
Social and institutional pressures significantly contribute to errors and accidents, especially in complex industrial settings. The pressure to keep systems running due to economic cost, or the influence of senior personnel, can lead to deliberate violations or misinterpretations of critical information. The 1977 Tenerife airplane crash (583 fatalities) was a complex tragedy influenced by time pressure, economic pressure, cultural hierarchy (first officer’s reluctance to challenge captain), and poor communication, leading a KLM plane to take off without clearance into a taxiing Pan Am plane in thick fog. Similarly, the 1982 Air Florida crash (78 fatalities) involved pilots taking off with ice on wings despite a first officer’s concerns, due to perceived time pressure and the captain’s authority. These incidents highlight the importance of designing systems and cultures that mitigate these pressures, such as by promoting checklists and safety-first attitudes.
Checklists
Checklists are powerful tools for reducing errors, particularly slips and memory lapses, in complex situations. They are most effective when two people collaboratively follow them: one reads, the other executes, and the first checks. This prevents the “paradox of groups” where individual responsibility is diluted when more people are involved. Checklists are essential in commercial aviation, where they are required for safety. However, they are fiercely resisted in many other industries, including medicine, where they are seen as insulting professional competence. Norman argues that it’s not a threat to competence to be human and prone to error under stress or interruption. Designing effective checklists is challenging and requires iterative, human-centered approaches. Electronic checklists are superior as they track skipped items and ensure completeness.
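The advantage Norman attributes to electronic checklists, tracking skipped items and guaranteeing completeness, is easy to sketch. The class and item names below are invented for illustration; the essential property is that a passed-over item is never lost, and the list only reports "complete" when every item is checked.

```python
class Checklist:
    """Electronic-checklist sketch: items may be skipped temporarily,
    but the list remembers them and is only complete when every item
    has been checked off."""
    def __init__(self, items):
        self.status = {item: False for item in items}

    def check(self, item: str):
        self.status[item] = True

    def skipped(self):
        # Items passed over (or never reached) remain visible.
        return [i for i, done in self.status.items() if not done]

    def complete(self) -> bool:
        return not self.skipped()

cl = Checklist(["flaps set", "trim set", "de-ice on"])
cl.check("flaps set")
cl.check("de-ice on")     # "trim set" was skipped
print(cl.skipped())       # ['trim set']
print(cl.complete())      # False
```

A paper checklist offers no such guarantee: a skipped line is easily never revisited.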
Reporting Error
Reducing errors requires admitting their existence and collecting data on them. Many institutions are reluctant to reveal errors due to fear of blame or public scrutiny. Norman praises Toyota’s Jidoka system for manufacturing, where workers are encouraged to report errors (sometimes stopping the assembly line) to find the root cause and prevent recurrence (Poka-Yoke or error-proofing, like asymmetrical screw holes or covers for critical switches). He also highlights NASA’s Aviation Safety Reporting System (ASRS), which allows pilots to submit semi-anonymous error reports without fear of punishment, leading to significant improvements in aviation safety. The medical field struggles with similar issues, lacking a neutral reporting body, but is slowly making progress.
Detecting Error
Errors do not necessarily lead to harm if detected quickly.
- Action slips are relatively easy to detect due to the discrepancy between intended and actual action, provided there is feedback.
- Memory-lapse slips are harder to detect because no action is performed, so there is “nothing to see” until an unwanted event occurs.
- Mistakes are the most difficult to detect because the actions taken are consistent with the (wrong) goal, and the faulty diagnosis may seem reasonable at the time. This can lead to “explaining away” anomalies, where warning signs are discounted with logical (but false) explanations.
The hindsight bias makes past events seem obvious and predictable, but during an actual incident, operators face high workload, stress, and overwhelming, often irrelevant, information, making accurate diagnosis extremely difficult. The best accident analyses take a long time to account for this complexity.
Designing for Error
Good design anticipates and minimizes error, but also makes errors easier to discover and correct. Norman’s credo: “Eliminate the term human error.” Instead, view errors as communication or interaction problems, seeing an action as an approximation of what was desired.
Key design principles for dealing with error:
- Understand and minimize causes: Design systems so human limitations are accommodated.
- Do sensibility checks: Machines should query outrageously dangerous or improbable actions (e.g., a radiation dose a thousand times too large, or a financial transfer of a million dollars).
- Make actions reversible (Undo): This is the most powerful tool for mitigating error impact.
- Make errors discoverable and correctable: Provide good, intelligible feedback and clear conceptual models.
- Don’t treat actions as errors; provide help and guidance: Assume partial correctness and facilitate task completion.
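A sensibility check like the one described, querying an outrageously large financial transfer, can be sketched in a few lines. The function name and thresholds here are invented assumptions, not a real API: the design point is that wildly out-of-range requests trigger a question rather than silent execution.

```python
def confirm_transfer(amount: int, typical: int = 1_000) -> str:
    """Sensibility-check sketch (hypothetical thresholds): query the
    user when a requested value is far outside the normal range,
    instead of silently executing it."""
    if amount > 100 * typical:
        return f"Confirm: transfer of ${amount:,} is unusually large. Proceed?"
    return "ok"

print(confirm_transfer(500))        # ok: within the normal range
print(confirm_transfer(1_000_000))  # queried: a thousand times typical
```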
Design should accommodate interruptions (a major source of memory-lapse errors) and multitasking (which degrades performance). Warning signals are often ineffective if they are numerous, annoying, or uninformative. Solutions include:
- Adding constraints: Physically separating confused controls, using different-sized openings for fluids (poka-yoke).
- Smart use of Undo: Allowing multiple levels of undo.
- Intelligent confirmations: Focus on the object being acted upon and provide “cancel” options, or silently save unsaved work.
- Minimizing slips: Making controls dissimilar, modes visible, and initial steps of procedures distinct.
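Multi-level undo, which the chapter calls the most powerful tool for mitigating error impact, is classically implemented as a history stack. This is a minimal sketch (the `UndoStack` class is invented for illustration): every change is pushed, so the user can step back through many actions, and undo bottoms out safely at the initial state.

```python
class UndoStack:
    """Multi-level undo sketch: each state change is pushed onto a
    history stack, so the user can reverse many actions, not just
    the last one."""
    def __init__(self, initial):
        self.history = [initial]

    def do(self, new_state):
        self.history.append(new_state)

    def undo(self):
        if len(self.history) > 1:   # never undo past the initial state
            self.history.pop()
        return self.history[-1]

    @property
    def state(self):
        return self.history[-1]

doc = UndoStack("")
doc.do("Hello")
doc.do("Hello, world")
print(doc.undo())  # back one level: 'Hello'
print(doc.undo())  # back another: ''
print(doc.undo())  # bottomed out safely: ''
```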
Ultimately, good design can prevent slips and mistakes, and save lives.
The Swiss Cheese Model of How Errors Lead to Accidents
James Reason’s Swiss Cheese Model explains that accidents rarely have a single cause. Instead, they result from the alignment of multiple contributing factors, like holes in slices of Swiss cheese. An accident only occurs when all the holes in multiple layers of defense align perfectly. This model teaches two lessons:
- Do not try to find “the” cause of an accident; there are usually multiple causes.
- Design systems to be resilient against failure by adding more layers of defense (“more slices of cheese”), reducing the opportunities for error (“fewer holes”), and using different mechanisms in different subparts of the system (so the “holes do not line up”).
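The arithmetic behind the model's second lesson is worth making explicit. Under the simplifying assumption that the layers fail independently (an idealization; real failures are often correlated), an accident requires a hole in every layer, so its probability is the product of the per-layer failure probabilities, and each added slice cuts the risk multiplicatively.

```python
from math import prod

def accident_probability(hole_probs):
    """Swiss cheese sketch: an accident needs a hole in *every*
    defensive layer to line up. Assuming independent layers, the
    chance is the product of per-layer failure probabilities."""
    return prod(hole_probs)

# Three defenses, each failing 10% of the time:
p3 = accident_probability([0.1, 0.1, 0.1])       # ~0.001
# Adding a fourth slice of cheese cuts risk another tenfold:
p4 = accident_probability([0.1, 0.1, 0.1, 0.1])  # ~0.0001
print(p3, p4)
```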
This principle of design redundancy and layers of defense is crucial for safety in complex systems like commercial aviation, which has dramatically improved safety by applying this model.
When Good Design Isn’t Enough
Norman acknowledges that while poor design is often the root cause, sometimes people really are at fault (e.g., drunk driving, sleep deprivation). However, these instances do not justify the blanket assumption that humans are always to blame. Furthermore, deliberate violations (e.g., a pilot disregarding warnings to meet a schedule) also contribute to accidents, often driven by social and economic pressures. The problem is complex, requiring not just design improvements but also cultural and organizational changes.
Resilience Engineering
For large, complex industrial systems, resilience engineering is an important approach. Its goal is to design systems, procedures, management, and training so that they can respond to problems as they arise and restore services with minimum disruption. It treats safety as a core value, not just a metric, continually assessing and improving against the changing potential for failure. This involves proactive testing (e.g., deliberately causing errors in live systems) to ensure backup systems work, acknowledging that real system failures involve complexities and stresses that simulations cannot replicate.
The Paradox of Automation
Automation improves efficiency and reduces human error in routine tasks. However, it introduces a new paradox: when automation fails, it often does so without warning, leaving humans “out of the loop.” Because humans may not have been paying close attention to automated operations, it takes time to notice, evaluate, and respond to failures, often leading to unexpected and dangerous impacts. The 1995 grounding of the cruise ship Royal Majesty due to a disconnected GPS antenna cable illustrates a huge mode error failure where automation switched to “dead reckoning” without sufficient indication, causing the crew to trust the system blindly for days. This emphasizes that automation should complement human capabilities, not replace them, and needs robust design for failure modes.
Design Principles for Dealing with Error
Norman summarizes key design principles for addressing error:
- Put knowledge in the world: Don’t require all knowledge to be in the head; provide visible cues for non-experts and infrequent operations.
- Use constraints: Employ physical, logical, semantic, and cultural constraints, along with forcing functions and natural mappings, to guide actions.
- Bridge the two gulfs: Provide clear feedforward (what actions are possible) and feedback (results of actions) to make system status understandable and consistent with user goals and expectations.
Design should embrace error as a learning opportunity, seek to understand its causes, and provide assistance rather than punishment. This shift in mindset is crucial for creating truly robust and user-friendly systems.
Chapter 6: Design Thinking
This chapter introduces “design thinking” as a crucial approach to problem-solving, emphasizing the importance of first identifying the correct problem before developing solutions. It outlines the human-centered design process and discusses the challenges of applying it in the real world of business.
Solving the Correct Problem
Norman asserts his consulting rule: “Never solve the problem I am asked to solve.” He argues that the stated problem is often merely a symptom of a deeper, fundamental root problem. Engineers and business professionals are trained to solve problems efficiently, but designers are trained to discover the real problems. A brilliant solution to the wrong problem can be worse than no solution at all. Good designers resist the urge to jump to solutions, instead diverging to thoroughly understand the underlying issues, generating many ideas, and only then converging on a proposed solution. This iterative process is called design thinking.
The Double-Diamond Model of Design
The double-diamond model of design, introduced by the British Design Council, illustrates the two phases of design:
- Finding the Right Problem: This phase involves a divergent “discover” stage (exploring all fundamental issues) followed by a convergent “define” stage (narrowing down to the real, underlying problem statement).
- Finding the Right Solution: This phase also has a divergent “develop” stage (exploring a wide variety of potential solutions) followed by a convergent “deliver” stage (converging on a proposed solution).
This iterative diverge-converge pattern frees designers from premature constraints, but can be unsettling for managers focused on schedules. Norman emphasizes that designers must be allowed to explore freely, but also held to schedule and budget constraints to ensure convergence.
The Human-Centered Design Process
The human-centered design (HCD) process is implemented within the double-diamond model and consists of four iterative activities:
- Observation (Design Research): Deeply understanding the target users in their natural environment (applied ethnography). This research focuses on their activities, goals, and impediments to uncover true needs. It differs from market research by being qualitative and in-depth, rather than large-scale and quantitative. Design research aims to understand what people really need, while market research understands what people will buy. Both are necessary.
- Idea Generation (Ideation): Brainstorming numerous potential solutions without immediate criticism, encouraging creativity and even “stupid” questions that challenge assumptions.
- Prototyping: Building quick, low-fidelity mock-ups (sketches, cardboard models, digital wireframes, even skits) to test ideas. The “Wizard of Oz” technique (mimicking a complex system with human operators behind the scenes) is a powerful early-stage prototyping method. Prototypes help refine both problem understanding and solution validation.
- Testing: Having a small group of target users (often 5, as suggested by Jakob Nielsen) interact with prototypes in realistic conditions. Testing often reveals new insights and allows for iterative refinement.
What I Just Told You? It Doesn’t Really Work That Way
Norman introduces “Don Norman’s Law of Product Development”: “The day a product development process starts, it is behind schedule and above budget.” This highlights the practical challenges of applying ideal HCD in business. Market-driven pressures (matching competition, adding new technology features) often prioritize speed and cost over deep user research, leading to “featuritis” and complexity. Multidisciplinary teams, with representatives from all aspects of the product cycle (design, engineering, marketing, manufacturing, sales, service), are crucial to overcome these challenges. They allow for shared understanding and collaborative problem-solving, preventing later, detrimental, piecemeal changes.
The Design Challenge
Good design is complex due to the multitude of conflicting requirements:
- Client vs. End-User Needs: Purchasers (e.g., housing developers, corporate purchasing departments) prioritize price, appearance, and reliability, often overlooking usability for the actual end-users. This leads to unusable products like office copiers selected for price, not ease of use.
- Internal Stakeholders: Engineers, manufacturing, sales, and service teams also have legitimate needs for the design that must be accommodated to prevent them from making detrimental “after-the-fact” changes. A harmonious, multidisciplinary team is essential from project inception to post-shipment support.
- Designing for “Special People” (Inclusive/Universal Design): There is no “average person.” Designing for all users, including the aged, infirm, handicapped, or those with varying skill levels, often requires flexibility and adjustable solutions (e.g., OXO kitchen tools, which were designed for arthritic hands but marketed as superior for everyone, removing the stigma of “special” design). Making designs inclusive often benefits everyone (e.g., larger, high-contrast lettering).
- Complexity vs. Confusion: Norman argues that complexity is good (life is complex, tools must match), but confusion is bad. Good design tames complexity by providing a clear conceptual model, allowing users to understand the underlying structure even in seemingly chaotic systems (e.g., a well-organized kitchen).
Standardization and Technology
Standardization is a crucial cultural constraint that simplifies life by requiring learning only once (e.g., driving on one side of the road, car pedal layouts). While beneficial for usability, establishing international standards is a laborious, politicized process that can take years, often resulting in compromises or multiple incompatible standards (e.g., metric vs. English units, electrical plug types).
Norman shares his experience with HDTV standardization, which took decades due to technological evolution and intense political battles between industries, resulting in a complex set of standards. Sometimes, technology evolves so quickly that standards become outdated before widespread adoption (e.g., digital time proposals, Swatch’s .beat time). The history of the QWERTY keyboard is another prime example: designed for mechanical limitations, its arbitrary layout became a global standard, inhibiting the adoption of more efficient designs like Dvorak due to legacy momentum and the “good enough” principle.
Deliberately Making Things Difficult
While the book champions usability, some things are deliberately designed to be difficult to use – and should be. Examples include:
- Doors designed to keep people in or out (e.g., a school door with two hard-to-reach latches for handicapped children, preventing them from exiting unsupervised).
- Security systems (passwords, keys).
- Dangerous equipment or operations (safeties on pistols, pins in fire extinguishers).
- Secret compartments or safes.
- Games (where figuring out operation is part of the challenge).
Even for deliberately difficult designs, understanding the principles of good design is essential. One should still design most of the product to be usable, making only the critical, security-related parts difficult by systematically violating usability principles (e.g., hiding components, using unnatural mappings, providing no feedback). This creates a necessary balance between safety/security and usability.
Chapter 7: Design in the World of Business
This final chapter addresses the practical realities and constraints that shape product design in the business world, examining competitive forces, innovation, and the future of design principles.
Competitive Forces
The business world imposes severe competitive pressures on design, prioritizing price, features, and speed, which often hinders the ideal iterative process of human-centered design. Norman recounts his experience with a startup cooking-equipment company that faced competitors even before launching, forcing difficult choices among conducting user studies, accelerating development, and adding unique features, while also needing to satisfy not just end users but investors and distributors. Established companies face similar pressures, leading to annual model releases whose development often begins before the previous model has even launched, and to a lack of effective mechanisms for user feedback. This reinforces Norman’s Law of Product Development: projects are behind schedule and over budget from day one.
Featuritis: A Deadly Temptation
A common problem in successful products is “featuritis,” or “creeping featurism”: the insidious tendency to continuously add features. It is driven by customer requests, competitive pressure to match rivals, and the need to stimulate sales in saturated markets. Norman’s Lego motorcycle example illustrates this: the kit evolved from 15 intuitive pieces to 29 pieces requiring instructions. Constantly adding features without removing old, unneeded ones makes products increasingly complex and difficult to use. Youngme Moon, a Harvard Business School professor, argues in her book Different that this competition-driven design leads to product homogenization, with companies hurting themselves by trying to match every feature of their rivals. She advocates focusing on existing strengths and ignoring irrelevant weaknesses. Jeff Bezos’s “customer obsessed” approach at Amazon.com exemplifies this focus on true customer needs over competitive feature-matching.
New Technologies Force Change
New technologies are powerful drivers of change, radically transforming products and interactions. The evolution of telephones, from crank models to smart screens (merging phones, computers, and cameras), illustrates how appearance and operation change dramatically. The demise of physical keyboards on portable devices led to on-screen keyboards and word-gesture typing systems (like Swype), which are efficient but still face the legacy problem of user resistance to new layouts (e.g., changing QWERTY). While technology changes how we do things, fundamental human needs remain unchanged (e.g., writing, communication, entertainment). Norman predicts new media and devices will proliferate, but basic human psychology and design rules will endure.
How Long Does It Take to Introduce a New Product?
The journey from idea to widespread product success takes not months but decades, sometimes centuries. Technology changes rapidly, while people and culture change slowly, producing a pace of change that is simultaneously fast and slow.
- Multitouch displays, invented in the early 1980s, took almost three decades to become affordable and robust enough for mass consumer products like the iPhone/iPad. This highlights the gap between research invention and commercial viability.
- The videophone, conceived in 1879, has still not become a common everyday communication tool some 140 years later. This demonstrates the immense difficulty of working out the myriad details, manufacturing the components, and overcoming public inertia for radical innovations.
- Many excellent innovations fail upon first introduction due to poor timing (e.g., Apple QuickTake camera, early digital picture frames, the first American automobile).
- The typewriter keyboard (QWERTY), designed in the 1870s based on mechanical constraints, became a standard that persists despite more efficient alternatives like Dvorak, due to legacy momentum.
Norman distinguishes between:
- Incremental innovation: Slow, continuous improvements to existing products (e.g., automobiles since Karl Benz’s first car). This is the most common form, often achieved through “hill climbing” (iterative testing and refinement).
- Radical innovation: Paradigm shifts driven by new technologies or by a redefinition of meaning (e.g., the Internet collapsing the publishing, telephone, and TV industries). Most radical ideas fail, and even successful ones take decades to be accepted.
The Design of Everyday Things: 1988–2038
Norman reflects on the enduring relevance of “The Design of Everyday Things,” emphasizing that while technology changes rapidly, people and culture change slowly. The fundamental design principles (discoverability, feedback, affordances, signifiers, mapping, conceptual models) remain constant because they are based on unchanging human psychology. He anticipates future changes like augmented reality, implanted technology, and the rise of cyborgs, which will blur the lines between human and machine.
He discusses the debate over whether technology makes us “smarter” or “stupider,” concluding that human plus machine is more powerful than either alone. He cites chess examples in which human-computer teams beat both the best humans and the best computers. This “distributed cognition” changes our tasks, freeing the mind from trivial details to focus on higher-level problems.
The Future of Books
Norman questions the traditional linear format of books in a digital age, envisioning dynamic, interactive multimedia books with video, audio, and personalized content. He recounts his early attempt to create such an interactive electronic book in the 1990s, which failed due to technology limitations and lack of full support. While current tools make amateur content creation easy, high-quality, professional multimedia books still require massive talent and resources. This leads to a future with a proliferation of amateur material (e.g., YouTube tutorials) alongside very expensive professional productions.
The Moral Obligations of Design
Design has a moral obligation to society, but faces challenges when design decisions are driven by the capitalistic marketplace’s emphasis on desire and fashion over usability and actual need. This leads to “needless features, needless models,” and planned obsolescence (e.g., manufacturing products to fail or redesigning styles yearly to encourage new purchases), which is good for business but bad for the environment. Norman advocates for a shift to sustainable models like subscriptions or products designed for longevity. The rise of new technologies and “smart screens” risks creating “superfluous, overloaded, unnecessary things.”
Design Thinking and Thinking About Design
Successful design requires the product to be bought, used, and enjoyed. This means satisfying not only human needs for function, understanding, and emotional satisfaction, but also meeting the requirements of manufacturing, engineering, marketing, sales, and service. Design is a complex, multidisciplinary activity where all parties must work together harmoniously.
Norman concludes by urging readers to become active participants in improving design:
- Designers: Fight for usability, understand business, and embrace multidisciplinary collaboration.
- Users: Voice concerns to manufacturers, boycott unusable designs, and support good designs with purchases.
He emphasizes that “it’s not your fault: it’s bad design” when products are frustrating. He encourages observing the world’s design details, appreciating good design, and criticizing bad design constructively.
Finally, Norman introduces “the rise of the small”: the increasing power of individuals and small groups, empowered by inexpensive tools (3D printers, open-source software, self-publishing), to create, design, and manufacture their own products and services. This shifts power globally, enabling “handed-up technology” from developing nations and fostering a renaissance of talent. Despite massive change, fundamental human principles like social interaction and the core design principles of discoverability, feedback, affordances, signifiers, mapping, and conceptual models will remain constant, guiding interactions with even fully autonomous machines.