Unlocking the Principles of Product Development Flow: A Comprehensive Summary

In “The Principles of Product Development Flow,” Donald G. Reinertsen challenges the conventional wisdom of product development, arguing that deeply ingrained beliefs, often borrowed from manufacturing, are fundamentally flawed and lead to significant waste and poor performance. Drawing on insights from diverse fields like economics, queueing theory, control engineering, and even military strategy, Reinertsen presents a new paradigm focused on optimizing the flow of value through the development process. His purpose is to equip product developers and executives with the principles and tools to make better economic decisions, understand the true costs of their current practices, and accelerate the adoption of more effective methods, ultimately leading to improved profitability and innovation. This summary delves into every core idea, argument, example, and practical tip presented in the book, providing a thorough yet easily digestible overview of Reinertsen’s transformative approach.

Introduction

This book, “The Principles of Product Development Flow” by Donald G. Reinertsen, serves as a guide for implementing lean and iterative product development. It offers a unique combination of practical methods and clear explanations of the underlying science, aiming to improve decision-making and throughput in product development organizations. By presenting a set of insightful and pragmatic principles, the book challenges traditional approaches and provides a framework for tailoring these principles to the specific economic factors of each enterprise.

The author’s purpose is to accelerate the adoption of a new paradigm in product development, one that emphasizes flow and economic optimization over traditional metrics like efficiency and conformance to plan. The book is structured around 175 principles, organized into clear sections, making it an easily referenced, practical resource for product developers and executives seeking significant improvements in their processes.

The Principles of Flow

This introductory chapter sets the stage by highlighting the fundamental problems with the dominant approach to product development and introducing the concept of Flow-Based Product Development as a powerful alternative. It argues that current orthodoxies are built on flawed beliefs and fail to recognize the true drivers of economic success.

What Is the Problem?

This section identifies and explains the key weaknesses of current product development practices, arguing that they are deeply dysfunctional and self-reinforcing. These problems stem from a set of interconnected and flawed beliefs about how product development should be managed.

  • Failure to Correctly Quantify Economics: Current practices focus on proxy variables instead of true economic impact, leading to poor decision-making. Quantifying the cost of delay (COD) for projects is a critical step missing in most organizations.
  • Blindness to Queues: Product development organizations are often unaware of the significant negative impact of queues, which are the primary cause of poor performance. Design-in-process (DIP) inventory is both physically and financially invisible, contributing to this blindness.
  • Worship of Efficiency: The misguided pursuit of high capacity utilization, driven by a belief that underutilized capacity is waste, leads to excessive queues and long cycle times. This focus on efficiency ignores the much larger economic costs of delay.
  • Hostility to Variability: Variability is incorrectly viewed as inherently bad and something to be eliminated, despite its essential role in innovation. This approach fails to consider the economic consequences of variability and how it interacts with asymmetric payoff functions.
  • Worship of Conformance: Blindly insisting on conformance to the original plan, even when new information emerges, destroys economic value by preventing the exploitation of opportunities and the bypassing of unexpected obstacles.
  • Institutionalization of Large Batch Sizes: Large batch sizes are favored due to a false perception of scale economies and reduced variability, but they actually increase cycle time, delay feedback, and increase risk and overhead.
  • Underutilization of Cadence: The failure to use regular, predictable rhythms for transferring information and coordinating activities leads to increased transaction costs, delayed feedback, and accumulated variability.
  • Managing Timelines instead of Queues: Focusing on managing detailed timelines rather than controlling process queues is less effective because queues are leading indicators of future cycle-time problems.
  • Absence of WIP Constraints: The lack of limitations on work-in-process (WIP) allows queues to grow uncontrollably, leading to longer cycle times and reduced quality, unlike in modern manufacturing and telecommunications.
  • Inflexibility: Prioritizing efficiency over flexibility leads to delays when faced with variability. This inflexibility prevents adaptation to emerging problems and opportunities.
  • Noneconomic Flow Control: Flow control decisions are often based on crude prioritization schemes rather than a sound economic framework, resulting in suboptimal sequencing of work.
  • Centralized Control: A tendency towards centralized control, driven by a focus on efficiency and a fear of chaos, leads to decision-making delays and hinders adaptation to rapidly changing conditions.

This section underscores the interconnected nature of these problems, creating a system where changing one piece in isolation is often ineffective.

A Possible Solution

This section introduces the eight core themes of Flow-Based Product Development, presenting them as a mutually supportive framework for overcoming the limitations of the current orthodoxy. These themes are grounded in economic thinking and draw insights from various disciplines.

  • Economics: The framework emphasizes economically based decision-making, quantifying the impact of all actions on life-cycle profits. This provides a unifying unit of measure for evaluating trade-offs between conflicting objectives.
  • Queues: Understanding and actively managing queues is central to improving flow. This involves recognizing their invisibility and quantifying their significant economic costs.
  • Variability: A radically different perspective on variability is presented, viewing it as a tool that can create economic value, particularly in the presence of asymmetric payoff functions.
  • Batch Size: Reducing batch size is highlighted as a powerful technique for improving cycle time, reducing variability, accelerating feedback, and lowering risk and overhead.
  • WIP Constraints: Applying constraints on work-in-process is presented as an effective method for controlling queue size, improving flow predictability, and forcing rate-matching between processes.
  • Cadence, Synchronization, and Flow Control: Using regular rhythms (cadence), aligning events in time (synchronization), and implementing economically optimum work sequencing are key tools for managing flow under uncertainty.
  • Fast Feedback: Accelerating feedback loops is crucial for rapidly adapting to unpredictability, truncating unproductive paths, and enabling decentralized control.
  • Decentralized Control: Decentralizing control, particularly for perishable decisions and opportunities, enables faster response times and empowers frontline workers.

These eight themes form the foundation of the Flow-Based approach, offering a coherent alternative to traditional product development methodologies.

The Relevant Idea Sources

This section identifies the key disciplines that provide the theoretical underpinnings and practical insights for the principles presented in the book. Drawing from a diverse range of fields highlights the interdisciplinary nature of effective product development.

  • Lean Manufacturing: Provides mature ideas for managing certain types of flow, particularly the importance of small batches and identifying waste. However, its methods need significant adaptation for the unique characteristics of product development.
  • Economics: Offers a framework for quantifying trade-offs, understanding value-added, and analyzing the economic impact of decisions, particularly in the presence of uncertainty and asymmetry.
  • Queueing Theory: Provides a quantitative understanding of the relationship between capacity utilization, variability, and queue size, enabling better management of waiting lines in development processes.
  • Statistics: Offers insights into the behavior of random variables and random processes, crucial for understanding and managing variability in product development.
  • The Internet: Provides advanced models for managing flow in the presence of high variability, particularly in its use of packet switching, WIP constraints, and dynamic flow control protocols.
  • Computer Operating System Design: Offers advanced methods for managing mixed-priority workflows and unpredictable tasks, providing insights into work sequencing and task management under uncertainty.
  • Control Engineering: Provides principles for designing feedback loops and control systems, enabling better understanding of dynamic response, stability, and the interaction between measurement and intervention.
  • Maneuver Warfare: Offers a compelling model for achieving decentralized control and alignment in uncertain environments, emphasizing initiative, rapid adaptation, and distributed reserves.

These diverse sources contribute to a holistic and robust framework for Flow-Based Product Development.

The Design of This Book

This section explains the chosen structure and style of the book, emphasizing its principle-based approach as a means to increase information density and make the content easily accessible and applicable.

  • Principle-Based Organization: The book is organized into 175 principles, presenting patterns and causal relationships rather than rigid rules or narrative stories. This allows for broad applicability and deeper understanding.
  • Information Density: The principle format aims to maximize the amount of useful content and minimize unnecessary filler.
  • Accessibility and Retrieval: The organization around principles facilitates easy navigation and allows readers to focus on specific areas of interest or retrieve particular information quickly.
  • Moderate Technical Literacy: The book assumes some familiarity with engineering or scientific concepts, including the occasional use of equations where they aid in explaining key ideas.

The book’s design is intended to provide a powerful and practical resource for readers seeking to improve their product development processes.

It Will Be a Long Journey, so Start Now

This concluding section of the chapter encourages readers to begin implementing the principles immediately, emphasizing the value of small, incremental changes and the potential for significant economic benefits even in the early stages of adoption.

  • Adoption Takes Time: New ideas, even compelling ones, are adopted slowly. The Toyota Production System took decades to gain widespread acceptance in Western manufacturing.
  • Small Batches for Implementation: Implementing changes in small batches is recommended over large, top-down initiatives. This approach reduces risk, lowers cost, produces faster results, and accelerates learning.
  • Avoid Waiting for Role Models: Delaying adoption until mature implementations are visible means missing out on significant economic opportunities, as seen in the slow adoption of lean manufacturing in the West.
  • Avoid Waiting for Academic Validation: While academic validation is valuable, practical application of useful ideas can generate significant benefits before they are widely accepted in academia.
  • Start Small and Quickly: Readers are encouraged to begin with small changes, pay attention to feedback, and continuously think about their practices.

This section serves as a call to action, urging readers to embrace the journey of transformation in their product development processes.

The Economic View

This chapter establishes the crucial importance of viewing product development through an economic lens, arguing that this is the only way to make sound decisions and achieve truly valuable improvements. It introduces the concept of a project economic framework as the primary tool for quantifying and evaluating the economic impact of various factors.

The Nature of Our Economics

This section explains the fundamental economic problem in product development, which involves managing multiple interacting variables to maximize overall economic results. It highlights the need for quantification to make informed decisions.

  • The Principle of Quantified Overall Economics: Decisions should be based on their quantified overall economic impact, using life-cycle profits as the ultimate measure. Focusing on proxy variables without understanding their economic influence leads to poor choices.
  • The Principle of Interconnected Variables: Decisions in product development rarely affect only a single variable; they simultaneously influence multiple factors. Quantifying these effects in a common unit of measure is essential for evaluating trade-offs.
  • Example of Quantifying Economic Impact: A simple example of whether to release an immature product to manufacturing demonstrates the value of quantifying costs and benefits to make an economically sound decision (a minimal worked sketch follows this list).
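
A minimal sketch of this kind of calculation in Python, with purely illustrative figures (the scrap cost, cost of delay, and time saved below are not from the book):

```python
# Hypothetical trade-off: release an immature design to manufacturing now,
# or wait a month to finish polishing it. All figures are illustrative.

cost_of_delay_per_month = 500_000    # life-cycle profit lost per month of delay
months_saved_by_early_release = 1.0  # schedule benefit of releasing now
expected_scrap_and_rework = 120_000  # expected cost of the design's immaturity

benefit = cost_of_delay_per_month * months_saved_by_early_release
net_impact = benefit - expected_scrap_and_rework

# A positive net impact means early release is the economically sound choice.
print(f"Net impact of early release: ${net_impact:,.0f}")  # $380,000
```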

This section establishes that a clear understanding of economics is necessary to see the true landscape of product development.

The Project Economic Framework

This section introduces the core tool for implementing an economic view: the project economic framework. It explains how this framework is used to quantify the sensitivity of life-cycle profits to various project and product attributes.

  • Quantifying Profit Sensitivities: A project economic framework quantifies the relationship between measures of performance (product cost, product value, development expense, cycle time, and risk) and life-cycle profits.
  • The Principle of Quantified Cost of Delay: Quantifying the cost of delay (COD) is highlighted as the single most important factor to measure. It is essential for evaluating the economic importance of cycle time and for quantifying the cost of queues (a worked sketch follows this list).
  • COD Unlocks Doors: Knowing the COD is critical for evaluating the cost of queues, the value of excess capacity, the benefit of smaller batch sizes, and the value of variability reduction. It transforms decision-making and organizational mindset.
  • COD at Any Milestone: COD can be computed for any milestone, not just the final product launch, allowing for economic evaluation of intermediate delays.
  • The Principle of Economic Value-Added: Value-added is defined as the change in the economic value of the work product, not simply what an informed customer values. Reducing risk, for example, clearly adds economic value.
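
As a concrete illustration, here is a minimal Python sketch of computing COD as the sensitivity of life-cycle profit to a one-month delay; the profit model and all numbers are hypothetical, not the book's:

```python
# Hypothetical cost-of-delay (COD) calculation: model life-cycle profit as a
# function of launch delay, then take the difference for one month of slip.

def life_cycle_profit(delay_months: float) -> float:
    """Toy profit model: delay shortens the sales window and erodes share."""
    baseline_profit = 12_000_000
    lost_sales_window = 400_000 * delay_months   # revenue lost at end of life
    lost_market_share = 150_000 * delay_months   # weaker competitive position
    return baseline_profit - lost_sales_window - lost_market_share

cod_per_month = life_cycle_profit(0) - life_cycle_profit(1)
print(f"Cost of delay: ${cod_per_month:,.0f} per month")  # $550,000 per month
```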

The project economic framework provides a systematic approach to making economically rational decisions in product development.

The Nature of Our Decisions

This section explores the characteristics of the decisions made in product development, highlighting why traditional decision-making approaches are often ill-suited to this environment.

  • The U-Curve Principle: Many important trade-offs in product development have U-curve optimizations, where the optimum lies between extremes and requires quantification to find.
  • U-Curves are Forgiving: U-curve optimizations have flat bottoms, meaning that even imperfect answers can significantly improve decision-making compared to operating far from the optimum.
  • The Imperfection Principle: It is more important to improve decision-making than to achieve perfect analysis. Imperfect economic frameworks are still valuable and lead to better choices than intuition alone.
  • The Principle of Small Decisions: The collective impact of many small economic decisions is enormous. Companies should focus on providing systems that support correct decision-making at low levels of the organization.
  • The Pareto Paradox: Focusing excessively on the highest-leverage problems (as suggested by the Pareto Principle) can lead to undermanaging the majority of issues, where significant opportunities for improvement often reside.
  • The Principle of Continuous Economic Trade-offs: Economic choices must be made continuously throughout the development process, not just at the beginning. New information constantly emerges, making it necessary to re-evaluate previous decisions.
  • Conformance vs. Adaptation: Blindly conforming to the original plan, even when new information changes the economics, destroys value. Decision-making should be based on the freshest economic information.
  • The First Perishability Principle: Many economic choices are more valuable when made quickly. Opportunities and obstacles age poorly, necessitating fast decision-making and decentralized control.
  • The Subdivision Principle: Most decisions can be decomposed into component parts with distinct economics. Identifying and keeping the economically attractive parts allows for reshaping bad choices into better ones.

Understanding the nature of these decisions is crucial for designing effective control strategies and information systems.

Our Control Strategy

This section outlines a strategy for influencing product development decisions, emphasizing the importance of harvesting early opportunities, decentralizing control with decision rules, and using market mechanisms where appropriate.

  • The Principle of Early Harvesting: Systems should be designed to capture early, cheap opportunities to improve economic performance (e.g., buying cycle time early in the project).
  • The First Decision Rule Principle: Decision rules, derived from the economic framework, are powerful tools for decentralizing economic control while maintaining alignment and streamlining the decision-making process.
  • Example of Decision Rule (Boeing 777): Boeing used a decision rule to allow engineers to make system-level optimum trade-offs between weight and unit cost without requiring management approval for every decision (a minimal sketch follows this list).
  • Control without Participation: Decision rules enable management to control the economic logic of decisions without being involved in every individual decision.
  • The First Market Principle: Decision makers should feel both the cost and the benefit of their decisions to make good economic choices. Market mechanisms, like pricing, can be used to manage demand for scarce resources.
  • Centrally Controlled Economies vs. Markets: Traditional development organizations often resemble centrally controlled economies, leading to inefficiencies and lobbying for resources. Market mechanisms can provide decentralized control.
  • Example of Congestion Pricing: A manager used differentiated pricing for CAD services (standard vs. premium turnaround time) to align project decisions with overall resource availability and costs.
  • The Principle of Optimum Decision Timing: Each decision has an optimum economic timing based on how its cost, payoff, and risk change over time. Decisions should not be made too early or too late.
  • Timing Based on Economics, Not Philosophy: Decision timing should be driven by the economic impact of waiting, not by broad concepts like “front-loading” or “responsible deferral.”
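
A minimal sketch of how such a decision rule might be encoded, assuming a hypothetical management-set exchange rate between weight and unit cost (the threshold below is illustrative, in the spirit of the Boeing 777 example):

```python
# Decentralized decision rule: an engineer may trade unit cost for weight on
# their own authority if the exchange rate stays under a preset threshold.

WEIGHT_RULE_USD_PER_POUND = 300.0  # hypothetical management-approved rate

def trade_is_preapproved(extra_unit_cost: float, pounds_saved: float) -> bool:
    """True if the trade falls within the engineer's delegated authority."""
    if pounds_saved <= 0:
        return False
    return extra_unit_cost / pounds_saved <= WEIGHT_RULE_USD_PER_POUND

print(trade_is_preapproved(extra_unit_cost=250.0, pounds_saved=1.0))  # True
print(trade_is_preapproved(extra_unit_cost=450.0, pounds_saved=1.0))  # False
```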

This section provides practical guidance on how to translate economic understanding into a viable control strategy.

Some Basic Economic Concepts

This section introduces three general economic principles that are particularly relevant to product development and can lead to better decision-making: marginal economics, sunk costs, and the value of information.

  • The Principle of Marginal Economics: Decisions should always be evaluated by comparing the marginal cost and marginal value of a change, not the total cost and total value (see the sketch after this list).
  • Example of Marginal Economics (Feature Creep): Adding low-priority features may seem desirable based on total value but may not be justified when considering the marginal cost and marginal value.
  • Example of Marginal Economics (Feature Shortfalls): When a feature falls short of its objective, working on it is only economically justified if the marginal gain outweighs the marginal cost, compared to other opportunities.
  • The Sunk Cost Principle: Money already spent (sunk cost) should not be considered when making future economic choices. Decisions should be based on the return on the remaining investment.
  • Example of Sunk Cost (Project Prioritization): When prioritizing projects, the decision should be based on the return on the remaining investment, not the total historical investment.
  • The Principle of Buying Information: The value of information is its expected economic value, measured by its ability to reduce uncertainty and improve economic outcomes. Investments can create value even if they don’t lead to successful products by generating valuable information.
  • Example of Buying Information (Clinical Trials): Clinical trials, though expensive, generate valuable information that reduces risk and allows for early termination of doomed drug candidates, thereby avoiding future costs.
  • Optimum Sequence for Risk Reduction: There is an economically optimum sequence for risk-reduction activities, prioritizing low-cost activities that remove significant risk early in the process.
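
A minimal sketch of a marginal-economics check, with hypothetical figures; note that nothing already spent appears anywhere in the comparison:

```python
# Marginal-economics check for a proposed extra feature: compare only the
# incremental value and incremental cost. Sunk costs and totals are ignored.

marginal_value = 80_000        # added life-cycle profit from the feature
marginal_dev_expense = 50_000  # added engineering cost
marginal_delay_cost = 60_000   # two extra weeks at a known cost of delay

worth_doing = marginal_value > marginal_dev_expense + marginal_delay_cost
print(f"Add the feature? {worth_doing}")  # False, despite high *total* value
```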

These principles provide fundamental economic insights that can be applied to various product development scenarios.

Debunking Some Popular Fallacies

This section uses the economic view to challenge two prevalent but flawed ideas in product development: the unqualified advocacy of set-based concurrent engineering (SBCE) and the assumption that a high product failure rate necessarily indicates poor management.

  • The Insurance Principle: Set-based concurrent engineering (SBCE), which involves developing multiple backup solutions in parallel, is an insurance policy that trades development expense for risk reduction. The economic benefit of this risk reduction must justify the cost of the insurance.
  • SBCE as a U-Curve Optimization: The optimum number of parallel paths in SBCE is an economic trade-off (a U-curve optimization) where incremental value equals incremental cost. Parallel paths are not always economically sensible.
  • The Newsboy Principle: A high probability of failure does not necessarily equal bad economics. In situations with strong economic asymmetries (where the gain from success is much larger than the cost of failure), the optimum success rate can be surprisingly low (see the expected-value sketch after this list).
  • Example of Newsboy Principle (New Product Success Rates): A high failure rate for new products may reflect the asymmetric payoffs in product development (potential for blockbuster success vs. limited investment in early failures) rather than poor management competence.
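
A short expected-value sketch showing how asymmetric payoffs can make a 90 percent failure rate good economics; the probabilities and payoffs are hypothetical:

```python
# With asymmetric payoffs, expected value per attempt can be strongly
# positive even when most attempts fail. All figures are illustrative.

p_success = 0.10
gain_if_success = 20_000_000  # blockbuster payoff
loss_if_failure = 1_000_000   # limited early investment written off

expected_value = (p_success * gain_if_success
                  - (1 - p_success) * loss_if_failure)
print(f"Expected value per attempt: ${expected_value:,.0f}")  # $1,100,000
```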

These examples demonstrate how applying economic thinking can challenge commonly held but flawed beliefs.

The Show Me the Money Principle

This concluding principle offers practical advice on communicating effectively with stakeholders who control financial resources.

  • The Show Me the Money Principle: To influence financial decisions and gain support from management, speak the language of economics (money) rather than the language of proxy variables.
  • Compelling Economic Arguments: Presenting well-quantified cost and benefit analyses in an economic context leads to faster decisions and enthusiastic support from management.

This principle emphasizes the importance of translating technical and operational issues into economic terms to resonate with key decision-makers.

Managing Queues

This chapter dives deep into the critical but often invisible problem of queues in product development, explaining their behavior, economic impact, and how to effectively manage them. It argues that queues are a primary source of waste and significantly hinder performance.

Queueing Theory

This section provides a brief introduction to queueing theory, explaining its origins and basic concepts, highlighting its relevance to product development due to the unpredictable nature of work arrival and task durations.

  • Origins of Queueing Theory: Developed to understand and manage unpredictable demands in telephone networks, queueing theory offers valuable insights for systems with random arrivals and service times.
  • Basic Vocabulary: Key terms like queue, server, arrival process, service process, and queueing discipline are introduced.
  • Kendall Notation: The M/M/1/∞ notation describes a simple queueing system: Markovian (memoryless) arrivals, Markovian (exponential) service times, a single server, and an unbounded queue.

This introduction sets the stage for understanding the behavior and impact of queues in product development.

Why Queues Matter

This section emphasizes the significant economic importance of queues in product development, explaining why they are often poorly managed and the various forms of waste they create.

  • Economic Importance: Queues are economically important because they cause valuable work products to sit idle, increasing inventory costs and leading to numerous other problems.
  • Poor Management: Queues are often poorly managed because they are invisible and product developers are typically unaware of their detrimental effects.
  • The Principle of Invisible Inventory: Product development inventory (design-in-process or DIP) is both physically and financially invisible, contributing to a lack of awareness and management.
  • Observable Effects of DIP: High DIP is observable through its effects: increased cycle time, delayed feedback, shifting priorities, and the need for status reporting.
  • The Principle of Queueing Waste: Queues are the root cause of the majority of economic waste in product development, causing damage in multiple ways.
  • Six Sources of Queueing Waste: Queues increase cycle time, increase risk, increase variability, raise costs (overhead), reduce quality (delayed feedback), and reduce motivation.
  • Queues off the Critical Path: Even queues not on the critical path incur economic damage through delayed feedback, increased overhead, reduced quality, and raised variability.
  • Lack of Measurement: A significant majority of product developers do not measure queues, indicating a widespread problem of blindness and poor management.
  • Root Cause in Economics: The ultimate root cause of queue blindness is the failure to correctly quantify economics, as queues appear to be free when their costs are not measured.

This section builds a strong case for why understanding and managing queues is fundamental to improving product development performance.

The Behavior of Queues

This section delves into what queueing theory tells us about how queues behave, focusing on the impact of capacity utilization, variability, and the structure of the queueing system.

  • The Principle of Queueing Capacity Utilization: Capacity utilization has an exponential impact on queue size. Approaching 100% utilization leads to exponentially large queues, a state common in product development.
  • Quantitative Relationship: For an M/M/1/∞ queue, standard formulas tie capacity utilization (ρ) to performance: average queue size is ρ²/(1−ρ), system occupancy is ρ/(1−ρ), queue time is ρ/(1−ρ) times the service time, and cycle time is 1/(1−ρ) times value-added time (see the sketch after this list).
  • Calculating Utilization from Queue Size: Because direct measurement of capacity utilization can be difficult, queue size and cycle time can be used to calculate utilization.
  • The Principle of High-Queue States: While low-queue states are more probable, high-queue states cause disproportionately more economic damage because they delay more jobs for longer periods.
  • The Principle of Queueing Variability: Variability (in arrivals and service times) increases queue size linearly. The impact of variability is less significant than the exponential impact of capacity utilization.
  • Allen-Cuneen Heavy Traffic Approximation: This equation shows that under heavy traffic, queue size scales linearly with variability, specifically with the average of the squared coefficients of variation of the arrival and service processes.
  • Variability Reduction Alone is Insufficient: Reducing variability alone (e.g., to zero service time variability) only halves average queue size and does not eliminate queues, as arrival variability still exists.
  • Capacity Margin vs. Variability Reduction: Capacity margin is often a more effective tool for fighting queues than variability reduction.
  • The Principle of Variability Amplification: Operating at high levels of capacity utilization amplifies variability. Small changes in loading translate into very large changes in cycle time when utilization is high.
  • Self-Inflicted Variability: The variability caused by high utilization is a self-inflicted wound, making the system more unpredictable.
  • The Principle of Queueing Structure: The structure of a queueing system affects its performance. A single shared queue for multiple servers (M/M/n) performs better than individual queues for each server (M/M/1).
  • Advantages of Shared Queues: Shared queues lead to lower variance in processing times and smaller overall queues.
  • Single vs. Multiple Servers: The choice between a single high-capacity server and multiple lower-capacity servers involves trade-offs related to processing speed, robustness, and the impact of individual bad jobs.
  • The Principle of Linked Queues: In linked queues, the output pattern of one queue becomes the arrival pattern for the next. The queue can act as a buffer, conditioning the flow.
  • Examples of Linked Queues (Traffic): Stoplights at freeway on-ramps condition the flow of cars to reduce turbulence and improve throughput at bottlenecks.
  • Upstream Process Impact: The process immediately upstream of a bottleneck significantly affects the queue at the bottleneck by determining the variability in the arrival rate.
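
The standard M/M/1 relationships are easy to tabulate; this short Python sketch shows how sharply queues and cycle time grow as utilization approaches 100 percent:

```python
# Standard M/M/1 queue formulas as a function of capacity utilization (rho).

def mm1_metrics(rho: float) -> dict:
    assert 0 < rho < 1, "utilization must stay below 100% for a stable queue"
    return {
        "avg_queue_size": rho**2 / (1 - rho),   # jobs waiting
        "avg_in_system": rho / (1 - rho),       # waiting + in service
        "cycle_time_multiple": 1 / (1 - rho),   # cycle time / value-added time
    }

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    m = mm1_metrics(rho)
    print(f"rho={rho:.2f}  queue={m['avg_queue_size']:6.1f}  "
          f"cycle/value-added={m['cycle_time_multiple']:5.1f}x")
```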

This section provides the theoretical foundation for understanding why queues form and how they behave in product development contexts.

The Economics of Queues

This section focuses on the economic implications of queues, explaining how to optimize queue size by balancing the cost of capacity against the cost of delay and how queueing discipline can reduce queue cost without reducing queue size.

  • The Principle of Queue Size Optimization: The optimum queue size is an economic trade-off between the cost of capacity and the delay cost of the queue.
  • Quantitative Optimization: The equation for optimum queue size shows that required excess capacity margin is proportional to the cost of delay and inversely proportional to the cost of capacity.
  • Quantitative vs. Qualitative Approach: A quantitative approach to queue optimization allows the solution to adapt to changes in the economic environment, unlike a static qualitative approach (e.g., “Queues are evil”).
  • Capacity Margin vs. Variability Reduction (Economic): Capacity margin is generally a more economically effective weapon for fighting queues than variability reduction.
  • The Principle of Queueing Discipline: Queue cost is affected by the sequence in which jobs in the queue are handled (queueing discipline).
  • Homogeneous vs. Nonhomogeneous Jobs: Queueing discipline matters most when jobs are nonhomogeneous in terms of delay costs and task durations, unlike in manufacturing where FIFO is often optimal.
  • Simple Heuristics: Two simple heuristics for nonhomogeneous jobs are: high cost of delay before low cost of delay, and short jobs before long jobs. Combining them yields an ordering by cost of delay divided by duration, or weighted shortest job first (sketched after this list).
  • Payoff from Queueing Discipline: The payoff from thoughtful queueing discipline is highest when queue sizes are large.
  • Reliance on Cost of Delay: Any economically grounded queueing discipline relies on knowing the cost of delay for the jobs in the queue.
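
A minimal weighted-shortest-job-first sketch with hypothetical jobs:

```python
# Weighted shortest job first: sequence jobs by cost of delay divided by
# duration, so short, expensive-to-delay work jumps the queue.

jobs = [
    {"name": "A", "cod_per_week": 10_000, "weeks": 4},
    {"name": "B", "cod_per_week": 3_000,  "weeks": 1},
    {"name": "C", "cod_per_week": 8_000,  "weeks": 2},
]

for job in sorted(jobs, key=lambda j: j["cod_per_week"] / j["weeks"],
                  reverse=True):
    ratio = job["cod_per_week"] / job["weeks"]
    print(f"{job['name']}: {ratio:,.0f} per week")  # order: C, B, A
```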

This section provides the economic rationale for managing queues and highlights the importance of cost of delay and appropriate queueing disciplines.

Managing Queues

This section provides practical principles and tools for monitoring and controlling queues in product development, introducing the cumulative flow diagram and Little’s Formula as essential instruments.

  • The Cumulative Flow Principle: Cumulative Flow Diagrams (CFDs) are highly useful tools for monitoring queues visually over time, showing cumulative arrivals and departures, queue size (vertical distance), and cycle time (horizontal distance).
  • CFD Information: CFDs provide information on demand, capacity, queue size, cycle time, and trends in demand and capacity.
  • Advantage of CFDs: CFDs provide more information than simply tracking queue size, allowing identification of whether queues are caused by excess arrivals or insufficient departures.
  • Batch Size Visibility on CFDs: Batch size problems are also visible on CFDs as jagged lines.
  • Little’s Formula: Little’s Formula states that average queue time equals average queue size divided by average processing rate (Wait Time = Queue Size / Processing Rate).
  • Robustness of Little’s Formula: This formula is robust and applies to almost all queueing disciplines, arrival processes, and departure processes.
  • Applying Little’s Formula to the System: Little’s Formula can be applied to the system as a whole to predict cycle time from total WIP and completion rate (see the sketch after this list).
  • Calculating Queue Size from Cycle Time: Little’s Formula can be used to determine equivalent queue size even when people claim no jobs are in queue.
  • Determining Queue Time: Queue time can be determined by comparing cycle time with value-added time or by calculating average service time from applied hours and throughput.
  • Critical Path vs. Off-Critical Path Queues: Only queues on the critical path increase total cycle time and generate delay cost, but off-critical path queues still incur other economic damage.
  • Stochastic Nature of Critical Path: Tasks have probabilities of being on the critical path, and this probability contributes to their cost of delay.
  • The First Queue Size Control Principle: It is more practical to control queue size directly than to control cycle time by controlling capacity utilization, due to the difficulty of accurately estimating demand and capacity.
  • The Steep Slope Advantage: Small changes in capacity utilization lead to large changes in queue size, making queue size a powerful control variable.
  • Supermarket Example: Supermarkets use queue size to trigger the opening and closing of check stands, effectively managing quality of service in the face of uncertain demand.
  • The Second Queue Size Control Principle: Queue size is a better control variable than cycle time because it is a leading indicator, providing earlier warning of emerging problems.
  • Example of Leading Indicator (Airport Immigration): An unexpected influx of passengers immediately increases queue size, but cycle time won’t reflect this increase until much later.
  • The Diffusion Principle: Over time, cumulative random variables (like queue size) tend to diffuse further and further from their mean, making sustained high-queue states probable and persistent.
  • Coin Flipping Example: A simple coin-flipping experiment illustrates how random processes can drift significantly from their starting point over time.
  • The Intervention Principle: Randomness can create a queue, but it cannot be relied upon to correct it. Quick and decisive intervention is necessary to prevent queues from persisting in expensive high-queue states.
  • Delayed Intervention Cost: The longer intervention is delayed, the more expensive the problem becomes.
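
A worked Little's Formula sketch at the system level, with hypothetical numbers:

```python
# Little's Formula: average cycle time = average WIP / average throughput.
# Applied here to a whole development pipeline with hypothetical numbers.

design_in_process = 60   # items of WIP (DIP) currently in the pipeline
throughput = 5           # items completed per week

cycle_time_weeks = design_in_process / throughput
print(f"Average cycle time: {cycle_time_weeks:.0f} weeks")  # 12 weeks

# Rearranged, the same relation exposes 'invisible' queues: if cycle time
# is 12 weeks but value-added time is only 2, the other 10 weeks are queue.
```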

This section provides actionable principles and tools for monitoring and actively managing queues in product development processes.

Round Up the Usual Suspects

This section lists common areas in product development processes where queues are typically found, making it easier for organizations to identify potential opportunities for improvement.

  • Marketing: Often a source of queues at the front end due to mismatch between capacity and variable demand.
  • Analysis: Frequent queues due to high capacity utilization and high variability in specialized and expensive resources.
  • CAD: Common queues resulting from restricted capacity and variable demand in specialized resources.
  • Purchasing: Queues often occur because R&D support is a small portion of their workload, and their processes and incentives are optimized for manufacturing needs.
  • Prototyping: Common location for queues due to specialized resources, variable demand, high efficiency focus, and limited excess capacity.
  • Testing: Probably the single most common and dangerous critical-path queue, often occurring late in development due to limited capacity, variable demand, and large batch sizes.
  • Management Reviews: Phase-gate processes, with their large batch transfers, inherently create queues at review gates.
  • Tooling: Common queues due to specialized resources, focus on efficiency, and waiting for complete information.
  • Specialists: Almost any specialist can become a bottleneck and a source of queues because they are scarce and managed for efficiency.

This list provides practical starting points for organizations looking to identify and address queueing problems in their processes.

Exploiting Variability

This chapter challenges the conventional wisdom that variability is always bad in product development, arguing that it is essential for innovation and can be exploited to create economic value, particularly in the presence of asymmetric payoffs.

The Economics of Product Development Variability

This section explains why variability is not inherently bad in product development and how its economic impact is determined by the payoff function, not just the amount of variability.

  • Variability and Value Creation: Product development creates value by changing designs, which inherently introduces uncertainty and variability. Risk-taking is central to value creation.
  • Manufacturing vs. Product Development: Unlike manufacturing, where reducing variability always improves economics, product development must distinguish between variability that increases and decreases economic value.
  • The Principle of Beneficial Variability: Variability can create economic value. A high-variability technical path can be the best economic choice even if it has a lower probability of success than a low-variability path, due to its potential for higher payoffs.
  • Payoff Function is Critical: Economic decisions in the presence of variability depend on both the probability distribution and the economic payoff function. Focusing only on probability (reducing variability) is insufficient.
  • The Principle of Asymmetric Payoffs: Asymmetric payoff functions, where the gain from a positive outcome is significantly larger than the loss from a negative outcome, enable variability to create economic value.
  • Option Pricing Model Relevance: The logic behind the Black-Scholes option pricing model, where higher volatility (variability) increases the value of options due to asymmetric payoffs (limited downside, unlimited upside), is relevant to understanding variability in product development.
  • Product Development Payoff Functions: Product development payoff functions are frequently asymmetric, though not always in the favorable direction: on some dimensions a shortfall is very costly to correct while exceeding the target adds little value, whereas on others a breakthrough pays off enormously while losses are capped at the investment made.
  • Higher Variability Can Create Higher Value: When the upside is large relative to the downside, higher variability can increase economic value by extending the outcome distribution into high-payoff regions.
  • The Principle of Optimum Variability: Variability should neither be minimized nor maximized. The optimal amount of variability is that which maximizes the expectation of economic payoff, considering both positive and negative outcomes.
  • Balancing Gains and Losses: Optimal variability occurs where the incremental gain from increased variability on the positive side of the payoff function is equal to the incremental cost on the negative side.
  • The Principle of Optimum Failure Rate: For generating information, the optimum failure rate in a test or experiment is typically 50%, as this point maximizes the information content.
  • Information Content and Surprise: The information content of an event comes from the degree of surprise associated with its outcome. Testing to failure is valuable because it yields high information content (the entropy sketch after this list shows the 50 percent peak).
  • Exploratory vs. Validation Testing: Product developers should distinguish between exploratory testing (optimized for information generation, aiming for ~50% failure) and validation testing (optimized for high success rates).
  • Economic Value of Information: Investing in tests and experiments should be justified by the economic value of the information they generate (risk reduction), compared to the cost of the test.
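
The 50 percent figure follows from Shannon's measure of information; this sketch computes the expected information generated by a test as a function of its failure rate:

```python
import math

# Expected information (Shannon entropy, in bits) generated by a test with
# failure probability p. It peaks at p = 0.5, the optimum failure rate for
# information generation.

def expected_information_bits(p_fail: float) -> float:
    return -sum(p * math.log2(p) for p in (p_fail, 1 - p_fail) if p > 0)

for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"failure rate {p:.2f} -> {expected_information_bits(p):.2f} bits")
# 0.50 yields the maximum: 1.00 bit per test
```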

This section fundamentally shifts the perspective on variability from a source of waste to a potential driver of economic value.

Reducing Variability

This section focuses on methods for reducing the amount of variability in product development processes, recognizing that this is a valid strategy when variability decreases economic value.

  • Two Approaches to Improve Economics of Variability: We can either change the amount of variability or change its economic consequences.
  • The Principle of Variability Pooling: Overall variation decreases when uncorrelated random tasks are combined. The coefficient of variation for the combined process is lower than for individual components (a numerical sketch follows this list).
  • Example of Variability Pooling (Shared Queues): Combining demand from multiple sources into a single shared queue for multiple servers reduces variability and leads to smaller queues and better response times.
  • Example of Variability Pooling (Estimating Work Content): Aggregating estimates for many small tasks reduces the relative noise in the overall estimate compared to individual task estimates.
  • Scheduling Granular Tasks: While granular estimates are good for assessing aggregate scope, scheduling tasks at that level of detail increases overall schedule variability.
  • The Principle of Short-Term Forecasting: Forecasting becomes exponentially easier and less variable at shorter time horizons.
  • Long Planning Horizons Increase Uncertainty: Attempting to reduce forecasting risk by adding analysis and reviews lengthens the planning horizon, and uncertainty grows exponentially with horizon length, so the extra process increases risk rather than reducing it.
  • Shortening Planning Horizons Reduces Variability: Reducing queues and cycle time shortens planning horizons, leading to more reliable forecasts and reduced risk. This creates a regenerative feedback loop.
  • Forecasting Technology: Short time horizons also help reduce compound errors when forecasting the evolution of multiple technologies.
  • Queues Force Long Planning Horizons: Large queues increase flow-through time, forcing longer planning horizons and increasing forecasting risks.
  • The Principle of Small Experiments: Many small experiments or steps produce less variation in the overall outcome than one big one, even if the total risk is the same. The coefficient of variation decreases with the number of trials.
  • Example of Small Experiments (Product Line Strategy): Introducing innovation in a series of smaller steps reduces the likelihood of complete failure compared to a single blockbuster product with many innovations.
  • The Repetition Principle: Repeating similar small tasks systematically reduces variability by allowing for process optimization and standardization.
  • Example of Repetition (Daily Build/Test): Moving to daily build/test cycles incentivizes the routinization and optimization of processes like code check-in, reducing variability.
  • The Reuse Principle: Design reuse reduces variability by eliminating uncertainty in completion time and reducing capacity utilization.
  • Economic Justification for Reuse: Reuse must still be economically justified and should not be pursued blindly if a new, higher-value technology is available.
  • Effective Reuse with Small Modules: Effective reuse is often achieved with smaller, less complex modules that have wider applicability.
  • Reusing Design Logic: Reusing the logic of design (e.g., through automated design tools) can be even more productive than reusing specific design instantiations.
  • The Principle of Negative Covariance: Variability can be reduced by applying a counterbalancing effect, creating a negative covariance between variables.
  • Example of Negative Covariance (Cross-Training): Cross-training workers to assist during peak demand periods offsets random increases in demand with compensating changes in capacity, reducing queue size variability.
  • Example of Negative Covariance (Stock Market): Diversifying a financial portfolio with uncorrelated stocks reduces overall volatility.
  • The Buffer Principle: Buffers trade money for variability reduction. Inserting safety margins in schedules reduces the chance of missing deadlines but at the cost of increasing cycle time.
  • Buffers Convert Uncertainty to Certainty: Buffers convert uncertain earliness into certain lateness. Trading cycle time for reduced schedule variability is often economically disadvantageous.
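
A numerical sketch of variability pooling: summing n uncorrelated tasks with identical statistics cuts the coefficient of variation by a factor of the square root of n (the task mean and standard deviation below are arbitrary):

```python
import math

# Variability pooling: the sum of n uncorrelated tasks with identical mean
# and standard deviation has a coefficient of variation sqrt(n) times
# smaller than a single task.

def pooled_cv(task_mean: float, task_sd: float, n: int) -> float:
    pooled_mean = n * task_mean
    pooled_sd = math.sqrt(n) * task_sd  # variances add when uncorrelated
    return pooled_sd / pooled_mean

print(f"1 task:   CV = {pooled_cv(1.0, 0.5, 1):.2f}")   # 0.50
print(f"25 tasks: CV = {pooled_cv(1.0, 0.5, 25):.2f}")  # 0.10
```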

This section provides practical methods for reducing the amount of variability when it is economically desirable to do so.

Reducing Economic Consequences

This section explores the alternative approach to improving the economics of variability: changing its economic consequences rather than just its amount. This is particularly relevant when variability creates economic value.

  • The Principle of Variability Consequence: Reducing the economic consequences of variability is often the best way to reduce its cost, rather than simply reducing the amount of variability.
  • Altering the Payoff Function: This approach focuses on changing the economic payoff function, either by reducing costs on the negative side (truncating bad paths) or increasing gains on the positive side (exploiting opportunities).
  • Manufacturing vs. Product Development Payoff Functions: In manufacturing, deviations from the mean are symmetrically bad. In product development, “left side” outcomes are bad, but “right side” outcomes can be good.
  • Feedback Alters Asymmetry: Fast feedback loops are key to altering payoff asymmetries by truncating bad paths quickly and enabling rapid exploitation of opportunities.
  • Example of Consequence Reduction (Software Bug): Rapid feedback on software bugs significantly reduces their economic consequences by allowing programmers to stop making bad assumptions quickly and by reducing the amount of dependent code that needs to be modified.
  • Order of Magnitude Reduction: The combined effects of feedback on problem frequency and consequences can lead to order of magnitude reductions in the cost of variability.
  • The Nonlinearity Principle: Systems behave linearly within a certain range, but performance can degrade dramatically when operating outside this range (zones of nonlinearity). It is important to avoid these zones.
  • Example of Nonlinearity (Sailboat Capsizing): A sailboat’s stability changes nonlinearly beyond a certain angle of heel, leading to capsizing.
  • Example of Nonlinearity (Project Profits and Delays): The economic damage done by project delays can become geometrically larger beyond a certain point.
  • Circuit Breakers: To protect against extreme left-side events, circuit breakers are needed to stop consuming resources and time when entering zones of severe economic damage.
  • The Principle of Variability Substitution: The cost of variability can be reduced by substituting variability in an inexpensive measure of performance for variability in an expensive measure of performance.
  • Example of Variability Substitution (Expediting): Paying expedite charges for parts substitutes variability in expenses for variability in schedule, recognizing that missing schedule is more expensive than exceeding the expense budget.
  • Trade-offs with Economic Understanding: Understanding the economic cost of variability allows for informed trade-offs between different performance parameters.
  • The Principle of Iteration Speed: It is usually more effective to improve iteration speed (increasing the number of cycles) than to reduce the defect rate per iteration, particularly when defect rates are below 50%.
  • Mathematical Leverage: Iteration speed has higher mathematical leverage in reducing ultimate defect rates than defect rate per iteration (a numerical sketch follows this list).
  • Queues Dominate Iteration Time: In most product development processes, iteration time is dominated by queue time, so reducing queues significantly improves iteration speed.
  • The Principle of Variability Displacement: Variability should be moved to the process stage where its economic cost is lowest.
  • Example of Variability Displacement (Air Traffic Control): Holding inventory (planes) is less expensive on the ground at the departure airport than in a holding pattern at the destination, so variance is displaced to the cheapest stage.
  • Example of Variability Displacement (Starting Projects): Holding new opportunities in a ready queue before they enter the product development pipeline is often less expensive than letting them accrue costs and increase overhead within the system.
  • Example of Variability Displacement (Software Bugs): Limiting the number of active bug fixes and holding others in a ready queue ensures that development time is spent on fresh, important problems.
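
A numerical sketch of the iteration-speed leverage described above, using a simple survival model in which a fraction d of defects escapes each iteration (the rates are illustrative):

```python
# If a fraction d of defects survives each iteration, the residual defect
# rate after n iterations is d**n. Doubling iterations squares the result;
# halving d only halves the base, so iteration speed has more leverage
# whenever d < 0.5.

def residual_defect_rate(d_per_iteration: float, iterations: int) -> float:
    return d_per_iteration ** iterations

print(residual_defect_rate(0.30, 2))  # baseline:          0.0900
print(residual_defect_rate(0.15, 2))  # halve defect rate: 0.0225
print(residual_defect_rate(0.30, 4))  # double iterations: 0.0081 (best)
```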

This section provides powerful strategies for managing the economic impact of variability, particularly by altering the consequences of both positive and negative outcomes.

Reducing Batch Size

This chapter highlights the critical and often unrecognized importance of reducing batch size in product development, arguing that it is one of the most powerful levers for improving flow, reducing risk, and increasing efficiency.

The Case for Batch Size Reduction

This section builds a compelling argument for reducing batch size by explaining the numerous benefits it brings to product development processes.

  • Batch Size is Unrecognized: Product developers typically do not think in terms of batch size, leading to underutilization of a key improvement tool.
  • Past Improvements as Batch Size Reductions: Many significant improvements in product development (like concurrent engineering and agile methods) are actually recognizable as batch size reductions.
  • The Batch Size Queueing Principle: Reducing batch size directly reduces cycle time by decreasing average queue size, without necessarily changing average arrival or departure rates.
  • Batch Size Variability Principle: Reducing batch size reduces variability in flow by preventing large batches from overloading processes periodically.
  • Batch Size Reduction as a Tool for Variability Reduction: Batch size reduction is a cheap, simple, and powerful way to reduce both variability and queues.
  • The Batch Size Feedback Principle: Reducing batch size accelerates feedback, which is particularly important in product development for truncating unproductive paths quickly and reducing the cost of failures.
  • Exponential Rework Cost: Rework becomes exponentially more expensive when feedback is delayed because incorrect assumptions are embedded in more dependent work.
  • Batch Size Risk Principle: Reducing batch size reduces risk through three factors: decreased cycle time (less exposure to change), decomposition of large risks, and smaller consequences of errors due to accelerated feedback.
  • Example of Risk Reduction (Internet Packets): Breaking messages into small packets significantly reduces the risk of transmission errors and the overhead associated with resending corrupted data.
  • Batch Size Overhead Principle: Contrary to intuition, large batches often increase overhead due to increased debug complexity and the need for more frequent status reporting on a larger number of open items.
  • Batch Size Efficiency Principle: Large batches reduce overall efficiency. While they may seem efficient locally, they destroy important feedback loops and increase rework.
  • Example of Efficiency Loss (Drawing Review): Reviewing drawings in one large batch delays feedback, allowing incorrect assumptions to be embedded in more drawings, increasing rework and overall inefficiency.
  • Example of Efficiency Loss (Software Debugging): Debugging large changes is exponentially more complex and less efficient than debugging small changes due to the increased number of interactions.
  • Freshness of Work: Engineers are more efficient when working on something fresh in their mind, and small batches facilitate this.
  • The Psychology Principle of Batch Size: Large batches lower motivation and urgency by diluting responsibility and delaying feedback.
  • Loss of Accountability: With large batches, individuals feel less responsible for the overall outcome because delays are likely to be caused by others.
  • Delayed Reinforcement: Slow feedback delays the positive reinforcement of success, reducing motivation.
  • The Batch Size Slippage Principle: Large batches tend to cause exponential cost and schedule growth due to increasing complexity and the accumulation of delays.
  • Empirical Evidence: Empirical analysis often shows significantly higher percentage slippage for longer-duration projects.
  • The Batch Size Death Spiral Principle: Large batches can create a regenerative death spiral, where delays lead to adding more features and resources, further extending the schedule and increasing complexity and risk.
  • Zombie Projects: Large projects can become “zombie projects” that are not good enough to get sufficient resources but are not bad enough to kill, destroying flow and devouring resources.
  • Batch Size Magnets: Large batches can act as magnets, attracting additional cost, scope, and risk, leading to their uncontrolled growth.
  • The Least Common Denominator Principle of Batch Size: When items are batched together, the entire batch acquires the properties of its most limiting element, often increasing overall cycle time.
  • Strict Precedence: Large batches often force strict precedence relationships, preventing downstream activities from beginning until all upstream activities are complete.

This section provides a comprehensive overview of the compelling benefits of reducing batch size in product development.

The Science of Batch Size

This section delves into the economic and technical principles behind batch size optimization, explaining the trade-offs involved and the importance of reducing transaction costs to enable smaller batches.

  • Economic Trade-offs: The optimum batch size is found by balancing a transaction cost against a holding cost, a concept embodied in the Economic Order Quantity (EOQ) equation (a sketch using the classic EOQ formula follows this list).
  • The Principle of Batch Size Economics: Economic batch size is a U-curve optimization, where minimizing total cost involves finding a balance between transaction and holding costs.
  • U-Curve Properties: The batch size U-curve is continuous (allowing for incremental changes), reversible (allowing for adjustments), and forgiving of errors (allowing for acceptable performance even with imperfect estimates).
  • The Principle of Low Transaction Cost: Reducing the transaction cost per batch is the primary factor enabling smaller batch sizes and lowering overall costs.
  • Japanese Manufacturing Insight: Japanese manufacturers (like Toyota) demonstrated that what were considered “fixed” transaction costs could be drastically reduced (e.g., through SMED).
  • Transaction Cost Reduction is Regenerative: Reducing transaction costs increases transaction volume, which justifies further investment in transaction cost reduction.
  • The Principle of Batch Size Diseconomies: The EOQ equation often underestimates the benefits of smaller batches because it doesn’t fully account for batch size diseconomies (costs that increase with batch size, like debug complexity) and nonlinear holding costs.
  • Hidden Costs of Large Batches: The hidden costs of large batches (like exponentially increasing debug and status reporting overhead) mean that calculated optimum batch sizes are often too large.
  • Test Assumptions Aggressively: Organizations should aggressively test their assumptions about optimum batch size by reducing it and measuring the results, as the benefits are often underestimated.
  • Heuristic for Aggressive Reduction: A useful heuristic is to aim for a batch size that is at least 30% smaller than what is calculated as optimal, as the cost of being wrong is relatively low.
  • The Batch Size Packing Principle: Small batches allow for finer tuning of capacity utilization by enabling better packing of work, particularly when some large batches are present.
  • Example of Batch Size Packing (Internet Packet Switching): Packet switching enables the exploitation of tiny windows of available capacity, improving overall utilization compared to circuit switching.
  • The Fluidity Principle: Loose coupling between product subsystems, enabled by modular architecture and stable interfaces, allows for work to flow in small, decoupled batches and improves flexibility in routing and sequencing.
  • Small Modules Enable Reuse: Small modules have less complex interfaces and more potential applications, making them easier to reuse and contributing to better interface stability.
  • Reusing Design Logic: Reusing the logical process of design, not just specific design instances, can also lead to significant improvements in design cycle time.
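
The EOQ trade-off is easy to make concrete. The minimal Python sketch below, using hypothetical demand, transaction-cost, and holding-cost figures, computes total cost across batch sizes and locates the bottom of the U-curve:

```python
import math

def total_cost(batch_size, demand, transaction_cost, holding_cost):
    """Total annual cost = (batches per year * transaction cost per batch)
    + (average inventory of batch_size / 2 * holding cost per item-year)."""
    return (demand / batch_size) * transaction_cost + (batch_size / 2) * holding_cost

# Hypothetical figures: 1,200 items/year of demand, $400 per transaction,
# $30 per item-year of holding cost.
demand, F, h = 1200, 400.0, 30.0

# Classic EOQ optimum: Q* = sqrt(2 * D * F / h)
q_star = math.sqrt(2 * demand * F / h)
print(f"optimum batch size ~ {q_star:.0f} items")

# The U-curve is flat near its bottom, so errors are forgiven.
for q in (0.5 * q_star, 0.7 * q_star, q_star, 1.5 * q_star):
    print(f"batch {q:6.0f} -> total cost {total_cost(q, demand, F, h):8.0f}")
```

With these numbers, total cost rises only about 6 percent when the batch is 30 percent below the computed optimum, which is why aggressively testing smaller batches, as the principles above suggest, is a low-risk experiment.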

This section provides a deeper understanding of the underlying principles governing batch size optimization and the key factors that enable successful batch size reduction.

Managing Batch Size

This section provides practical principles and tactics for actively managing batch size reduction in product development processes, focusing on identifying opportunities and overcoming common obstacles.

  • Identifying Batch Size Problems: Training oneself to recognize where batch size issues occur is the first step in addressing them.
  • The Principle of Transport Batches: The most important batch to focus on is the transport batch (the size of the transfer between process steps), as it directly affects cycle time and feedback speed.
  • Production vs. Transport Batches: Production batch size (work done uninterrupted in a single setup) and transport batch size are distinct decisions with different economic drivers.
  • Small Transport Batches are Key: Even with large production batch sizes, small transport batches can accelerate feedback and enable overlapping activities.
  • Example of Transport Batches (Internet): Packet switching (small transport batches) enables rapid message transmission even with large file sizes.
  • The Proximity Principle: Physical proximity (colocation) is a powerful enabler of small transport batches by reducing the transaction cost of communication and facilitating face-to-face interactions.
  • Face-to-Face Communication Benefits: Colocation enables real-time, high-bandwidth communication, accelerating feedback and improving team cohesion.
  • Dispersed Teams and Large Batches: Physically dispersed teams tend to use large-batch, asynchronous communication, losing many of the benefits of smaller batches.
  • Proximity and Flexibility: Proximity promotes flexibility by making it easier for workers to assist adjacent stations and move with the work.
  • The Run Length Principle: Short run lengths (introducing products more frequently with fewer new features) reduce queues and provide faster market feedback.
  • Scheduling Variance Accumulation: Shorter run lengths allow for interleaving tasks to prevent variances from accumulating.
  • The Infrastructure Principle: Investments in infrastructure (like test automation and testbeds) are essential for enabling small batches by reducing transaction costs and decoupling dependencies between batches.
  • System Test Limitations: Relying solely on system-level testing, though it appears to minimize investment and maximize test validity, places testing on the critical path and delays feedback, increasing the cost of changes.
  • Justifying Infrastructure Investment: Investing in infrastructure for small batch processes (like subsystem testbeds) must be justified using an economic framework that accounts for cycle time benefits.
  • The Principle of Batch Content: Deciding which work content goes into each batch is important for improving economics, particularly by sequencing activities that add the most value for the least cost first.
  • Example of Batch Content (Perpetual Motion Car): Sequencing the high-risk, low-cost component first minimizes the amount of accumulated investment exposed to the risk of failure.
  • Sequencing Based on Value and Cost: Activities with high benefit-to-cost ratios should be sequenced first to maximize value creation for minimum investment (see the sketch after this list).
  • Sequencing for Different Economic Drivers: Sequencing decisions should vary depending on whether cost of delay or manufacturing cost is the dominant economic driver.
  • The Batch Size First Principle: When attacking queues, it is generally best to start with batch size reduction rather than adding capacity at bottlenecks, as batch size reduction is cheaper and massively reduces queues by reducing variability.
  • Bottleneck Seduction: Bottlenecks are often the focus of improvement efforts because their impact is intuitively appealing, but addressing batch size first is usually more effective.
  • Stochastic and Mobile Bottlenecks: Product development bottlenecks are often temporary and mobile, making it less effective to add capacity at a perceived bottleneck compared to reducing batch size.
  • Example of Batch Size First (Boy Scout Hike): Breaking a hiking group into smaller batches is often more effective than simply having the slowest hiker lead a single large batch.
  • The Principle of Dynamic Batch Size: Batch size can be adjusted dynamically to respond to changing economic conditions, such as higher holding costs near the end of a project or lower value of feedback as defect rates drop.
  • Batch Size Varies with Time: Unlike manufacturing, where economic factors for batch size are relatively static, product development’s economic factors (holding costs, fixed costs) change continuously, justifying dynamic batch size adjustments.
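
As a rough illustration of the batch-content sequencing above (the activity names and numbers are invented), the sketch below orders activities by benefit-to-cost ratio so that early spending retires the most risk and creates the most value per dollar invested:

```python
# Hypothetical activities: (name, value created, cost to perform).
activities = [
    ("validate risky engine concept", 50, 5),
    ("style exterior", 20, 10),
    ("engineer chassis", 30, 40),
    ("tool up for production", 10, 45),
]

# Sequence by benefit-to-cost ratio, highest first.
plan = sorted(activities, key=lambda a: a[1] / a[2], reverse=True)

spent = 0
for name, value, cost in plan:
    spent += cost
    print(f"{name:30s} ratio={value / cost:4.1f} cumulative spend={spent}")
```

Note that the high-risk, low-cost item lands first, mirroring the perpetual motion car example: the least accumulated investment is exposed to the largest risk of failure.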

This section provides actionable guidance and examples for implementing and managing batch size reduction efforts.

Applying WIP Constraints

This chapter introduces the powerful concept of Work-in-Process (WIP) constraints as a key tool for controlling queue size, improving flow predictability, and forcing rate-matching between process steps. It draws parallels with WIP management in both manufacturing (Kanban) and telecommunications networks.

The Economic Logic of WIP Control

This section explains the fundamental rationale behind using WIP constraints to manage queues and cycle time, highlighting the statistical and economic benefits of limiting the amount of work in the system.

  • WIP Constraints and Cycle Time: WIP constraints set upper limits on in-process inventory, which, by Little’s Formula, directly control cycle time.
  • Shift in Mindset: Effectively using WIP constraints requires shifting focus from controlling cycle time directly to controlling WIP, a leading indicator.
  • WIP Measurement is Lacking: A significant majority of product developers do not measure WIP, highlighting a major area for improvement.
  • The Principle of WIP Constraints: Constraining WIP controls cycle time and flow by preventing queues from exceeding a fixed limit, as in an M/M/1/k queueing system (see the sketch after this list).
  • Trade-offs of WIP Constraints: WIP constraints reduce average cycle time but can lead to blocking (rejecting new work) and underutilization (lost capacity) if set too low.
  • Economic Benefits of WIP Constraints: Even relatively light WIP constraints can produce significant cycle-time savings with minimal costs of underutilization and blocking, resulting in a net economic benefit.
  • Cost-Benefit Analysis: A sample cost-benefit analysis shows that the economic benefits of a moderate WIP constraint can significantly outweigh its costs.
  • The Principle of Rate-Matching: WIP constraints are an effective way to force rate-matching between adjacent processes by limiting the size of the WIP pool between them.
  • Internet Flow Control: Packet-switching networks use WIP constraints (window size) to force the sender to match the receiver’s rate, enabling reliable communication between systems with vastly different speeds.
  • The Principle of Global Constraints: Global WIP constraints (like in Theory of Constraints) limit the total WIP upstream of a stable bottleneck, but they don’t control interprocess WIP pools and can lead to WIP starvation when the bottleneck capacity fluctuates.
  • Limitations of TOC Global Constraints: TOC focuses only on the primary bottleneck and doesn’t react to temporary bottlenecks elsewhere until they affect the primary bottleneck, and it can lead to uneven WIP distribution after bottleneck fluctuations.
  • The Principle of Local Constraints: Constraining local WIP pools (like in the Kanban system) is often more effective for managing stochastic bottlenecks.
  • Kanban System as Local WIP Constraint: Kanban uses physical or visual signals to limit the amount of WIP between process steps, controlling cycle time for the entire system.
  • Kanban Reacts to Emergent Bottlenecks: The Kanban system automatically throttles upstream processes when a bottleneck emerges anywhere in the system.
  • Stability and Smooth Flow: When capacity is regained, the Kanban system enables smooth flow because WIP is distributed throughout the system, preventing starvation.
  • Faster Feedback in Kanban: Kanban provides faster feedback on capacity changes than TOC or traditional planning systems.
  • Combining WIP Constraints and Flexibility: Combining WIP constraints with flexible resources (like cross-trained workers) enhances system effectiveness.
  • The Batch Size Decoupling Principle: Using WIP ranges (allowing the WIP pool to fluctuate between a limit and zero) decouples the batch sizes and timing of adjacent processes, allowing for economically optimal batch sizes in each process.
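
The trade-off can be checked quantitatively with the standard closed-form results for the M/M/1/k queue mentioned above. The sketch below (parameters are illustrative) computes blocking probability, average WIP, and average time in system for a WIP limit of k:

```python
def mm1k(rho, k):
    """Standard M/M/1/k results for utilization rho != 1 and WIP limit k.
    Returns (blocking probability, average WIP, average time in system),
    with time measured in units of the mean service time (mu = 1)."""
    norm = (1 - rho ** (k + 1)) / (1 - rho)        # sum of rho^n for n = 0..k
    p = [rho ** n / norm for n in range(k + 1)]    # steady-state probabilities
    p_block = p[k]                                 # arrivals rejected at the limit
    wip = sum(n * p[n] for n in range(k + 1))      # average number in system
    lam_eff = rho * (1 - p_block)                  # accepted arrival rate
    return p_block, wip, wip / lam_eff             # Little's Formula for time

for k in (5, 10, 20, 1000):   # k = 1000 approximates "no WIP constraint"
    b, wip, t = mm1k(rho=0.9, k=k)
    print(f"k={k:4d}  blocked={b:6.1%}  avg WIP={wip:5.2f}  avg time={t:5.2f}")
```

At 90 percent utilization, a WIP limit of 10 cuts average time in system from about 10 service times to under 5 while blocking only about 5 percent of arrivals, exactly the "light constraint, large cycle-time saving" result described above.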

This section provides the economic and technical foundation for understanding the power and effectiveness of WIP constraints.

Reacting to Emergent Queues

This section outlines a variety of demand-focused and supply-focused interventions that can be taken when WIP reaches its upper limit, providing practical strategies for controlling emerging queues.

  • Real Art of Queue Management: Effective queue management involves not just monitoring but also having preplanned actions to take when WIP reaches its limits.
  • Demand-Focused Approaches: These approaches reduce WIP by limiting new work or shedding existing work.
  • The Principle of Demand Blocking: The simplest response is to block all demand when WIP reaches its upper limit, either by ejecting the job or holding it in an upstream queue (see the sketch after this list).
  • Queue Shifting: Holding work in an upstream queue shifts the location of the queue and can be beneficial if different queues have different holding costs.
  • The Principle of WIP Purging: When WIP is high, purging low-value projects is logical because the economic cost of holding WIP is higher during congestion. This frees up capacity for high-value jobs.
  • Zombie Projects: Companies often fail to kill low-value projects, creating “zombies” that destroy flow.
  • The Principle of Flexible Requirements: A smaller batch approach to WIP purging is to shed or relax requirements during periods of congestion. The economic cost of a requirement rises when it blocks a larger queue.
  • Preplanning Flexible Requirements: Identifying requirements that can be dropped or relaxed in advance allows for better decision-making during crises and enables product architecture that facilitates shedding them.
  • Supply-Focused Approaches: These approaches reduce WIP by increasing the rate at which work is cleared from the system.
  • The Principle of Resource Pulling: Quickly applying extra resources to an emerging queue (bottleneck) is crucial because queues grow faster than they shrink.
  • Small Amount of Resource, Large Impact: Even a small amount of added resource, even if applied inefficiently, can dramatically shrink a queue on the steep section of the queueing curve.
  • The Principle of Part-Time Resources: Part-time resources loaded to less than full utilization can be quickly shifted to respond to emerging queues, providing valuable surge capacity.
  • Part-Time Resources for High Variability Tasks: Part-time resources are particularly valuable for high-variability tasks on the critical path that are prone to congestion.
  • The Big Gun Principle: Pulling high-powered experts (scarce, highly productive resources) onto emerging bottlenecks can quickly break the back of a growing queue.
  • Underutilizing Big Guns: To be available for crises, big guns should be loaded to less than full utilization during normal periods.
  • The Principle of T-Shaped Resources: Developing people with deep expertise in one area and broad knowledge in many (T-shaped resources) creates ideal resources for controlling emerging queues.
  • Systematic Development of T-Shaped Resources: Creating T-shaped resources requires hiring people with the potential, providing assignments to broaden their skills, investing in training, and structuring incentives.
  • The Principle of Skill Overlap: Cross-training resources at adjacent processes is particularly useful because the upstream resource is the first to receive a throttling signal from a downstream WIP constraint.
  • Example of Skill Overlap (Software Development): Cross-training programmers to do testing allows them to work on the testing queue during periods of congestion.
  • Mix Changing Approach: This approach controls emergent queues by selectively holding back work that would make the queue worse.
  • The Mix Change Principle: Using upstream mix changes to regulate queue size involves prioritizing work in upstream processes based on how it affects downstream queues.
  • Example of Mix Change (Mechanical Analysis): Prioritizing design tasks with high design-time and low testing-time when the testing queue is high (and vice versa) uses negative covariance to manage queues.
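
The demand-focused interventions above can be sketched as a simple intake policy (the WIP limit, purge threshold, and value field are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    value: float   # rough economic value, used for purging decisions

WIP_LIMIT = 10        # block new demand at this level
PURGE_THRESHOLD = 8   # start shedding low-value work at this level

def admit(job: Job, wip: list) -> bool:
    """Demand blocking: reject (or hold upstream) when WIP hits its limit."""
    if len(wip) >= WIP_LIMIT:
        return False
    wip.append(job)
    return True

def purge_low_value(wip: list) -> list:
    """WIP purging: during congestion, shed the lowest-value jobs first,
    since the cost of holding WIP is highest when the system is congested."""
    while len(wip) > PURGE_THRESHOLD:
        wip.remove(min(wip, key=lambda j: j.value))
    return wip
```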

This section provides a rich set of practical strategies for intervening when WIP levels become too high.

WIP Constraints in Practice

This section delves into slightly more subtle aspects of WIP control, introducing advanced strategies used in telecommunications networks and emphasizing the importance of making WIP visible and recognizing the cumulative effect of small reductions.

  • Aging Analysis: Monitoring the distribution of time-in-queue (aging analysis) identifies outliers (jobs stuck in the queue for a long time) that may be experiencing unexpected problems and doing disproportionate damage.
  • The Escalation Principle: Creating a preplanned escalation process for outliers (jobs that exceed a certain time limit in queue) brings them to the attention of higher organizational levels or automatically raises their priority.
  • Rule-Based Escalation: Some computer operating systems automate priority escalation based on time-in-queue.
  • Economic Basis for Escalation: Escalating priority should ideally be based on the economic cost of delay, reflecting the potential for disproportionate damage from long delays.
  • The Principle of Progressive Throttling: Increasing throttling of demand as WIP approaches the upper limit (rather than only blocking at the limit) leads to smoother transitions in flow and leaves more margin to handle bursts of demand (like RED/REM on the Internet).
  • Progressive Throttling in Product Development: Approval criteria for new projects can be made more demanding as congestion increases, or congestion pricing can be used for shared resources.
  • The Principle of Differential Service: Differentiating quality of service by workstream (categorizing jobs by cost of delay) allows for better service for high-value jobs within a shared resource pool.
  • Capacity and WIP Allocation by Category: Allocating both capacity and setting WIP limits for different categories of work provides differentiated flow-through times.
  • Example of Differential Service (Software Maintenance): Allowing one “hot” job with head-of-the-line privileges in software maintenance queues provides very fast flow-through for that job.
  • The Principle of Adaptive WIP Constraints: Adjusting WIP constraints dynamically as capacity changes (like window size on the Internet) allows the system to operate more efficiently and maintain cycle-time goals in the face of unpredictable fluctuations.
  • AIMD on the Internet: The Internet's Additive Increase Multiplicative Decrease (AIMD) algorithm adjusts sending rates based on congestion signals, keeping the system stable (see the sketch after this list).
  • Adaptive WIP in Product Development: Permitting more WIP when flow-through is fast (and less when it’s slow) based on observed processing rates helps maintain cycle-time goals.
  • The Expansion Control Principle: Preventing uncontrolled expansion of work is necessary because some tasks can consume unlimited time and resources, blocking the system.
  • Timing Out Tasks: Setting a time limit on how long any job can run (like in computer operating systems) prevents poorly behaved jobs from locking up the system.
  • Minimum Acceptable Progress Rate: Terminating a task when it reaches the point of diminishing returns (where further investment does not yield sufficient incremental improvement) is another way to control expansion.
  • The Principle of the Critical Queue: Constraining WIP in the section of the system where the queue is most expensive minimizes overall queue cost.
  • Example of Critical Queue (Air Traffic Control): Holding inventory is least expensive on the ground, so air traffic control systems limit departures to congested destinations to avoid expensive holding patterns.
  • Example of Critical Queue (Software Bugs): Limiting the number of active bug fixes and holding others in a ready queue keeps the WIP in the cheapest stage and ensures focus on important problems.
  • The Cumulative Reduction Principle: Achieving permanent WIP reductions can be done gradually by consistently having a small excess of departure rate over arrival rate.
  • Example of Cumulative Reduction (R&D Projects): Steadily measuring and reducing WIP can lead to order of magnitude reductions in cycle time over time.
  • The Principle of Visual WIP: Making WIP continuously visible is essential for managing it effectively. Physical artifacts (like sticky notes on whiteboards) can represent invisible inventory.
  • Whiteboard Benefits: Whiteboards make WIP visible, enforce WIP constraints, create synchronized daily interaction, promote interactive problem solving, and foster team ownership.
  • Computer Systems vs. Manual Systems: While computer systems can replicate whiteboard functions, they often lack the simple elegance and flexibility of manual systems in practice.
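
The AIMD idea translates directly into a few lines of code. This sketch (the constants are illustrative, not from the book) adjusts a WIP limit the way TCP adjusts its congestion window:

```python
def aimd_update(wip_limit, congested, increase=1, decrease_factor=0.5, floor=1):
    """Additive Increase Multiplicative Decrease: grow the WIP limit slowly
    while flow is smooth; cut it sharply on a congestion signal."""
    if congested:
        return max(floor, int(wip_limit * decrease_factor))
    return wip_limit + increase

# Example trajectory: the limit creeps up, then halves when congestion appears.
limit = 8
for congested in [False, False, False, True, False, False]:
    limit = aimd_update(limit, congested)
    print(limit)   # 9, 10, 11, 5, 6, 7
```

The asymmetry is deliberate: congestion is expensive and grows regeneratively, so the system backs off much faster than it ramps up.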

This section provides advanced principles and practical tactics for fine-tuning WIP control and making it an integral part of product development management.

Controlling Flow Under Uncertainty

This chapter explores advanced concepts and techniques for managing the flow of work through product development processes, particularly in the presence of uncertainty. It draws heavily on lessons from telecommunications networks, which are designed to be robust in variable environments.

Congestion

This section explains the phenomenon of congestion in systems, drawing parallels with traffic flow on highways and highlighting how high capacity utilization can lead to drops in throughput and system instability.

  • Congestion Defined: Congestion is a system condition characterized by high capacity utilization and low throughput, often resulting from a regenerative feedback loop.
  • Traffic Flow Analogy: Highway throughput is the product of traffic density and speed, producing a parabolic curve: throughput falls toward zero both at maximum density (jammed traffic) and at maximum speed (a nearly empty road); see the worked equation after this list.
  • Unstable Operating Point: The low-speed, high-density operating point on the left side of the throughput curve is inherently unstable due to regenerative feedback.
  • Stable Operating Point: The high-speed, lower-density operating point on the right side of the throughput curve is inherently stable.
  • Regenerative Feedback in Product Development: Similar regenerative feedback occurs in product development, where delays and frustration lead to increased work being injected into the system, further reducing throughput.
  • The Principle of Congestion Collapse: Systems prone to congestion can experience a sudden and catastrophic drop in output when loading reaches a critical level.
  • Congestion on the Internet: Congestion collapse on the Internet occurred when high loads led to delayed packets, increased retransmissions, buffer overflows, and ultimately zero throughput despite 100% utilization.
  • Congestion in Factories: Factories with very high utilization can experience drops in output due to increased expediting and interruption of work flow.
  • The Peak Throughput Principle: For systems with a strong throughput peak, operating near the peak on the stable side of the curve maximizes throughput.
  • Control Variable: Controlling occupancy (number of jobs in the system) is often the easiest way to maintain the system at the desirable operating point, as demonstrated by stoplights on freeway on-ramps.
  • The Principle of Visible Congestion: Making congestion visible is essential for controlling it. Since product development inventory is invisible, proxies are needed.
  • Whiteboards and Sticky Notes: Visualizing WIP using physical artifacts like sticky notes on whiteboards can make congestion visible.
  • Forecast of Expected Flow Time: Providing users with a forecast of expected flow time (calculated from WIP and processing rate) is a more effective way to inform them of congestion than simply displaying queue size.
  • Disneyland Example: Disneyland informs guests of expected wait times rather than queue size.
  • The Principle of Congestion Pricing: Using pricing to reduce demand during congested periods aligns decision-makers with the cost of loading a congested process.
  • Time-Shifting Demand: Differentiated pricing (like peak vs. off-peak rates) encourages flexible customers to time-shift their demand, reducing congestion.
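
The parabolic throughput curve can be made precise with the classic Greenshields traffic model (a standard idealization, not a formula from the book): if free-flow speed is $v_f$ and jam density is $k_j$, speed falls linearly with density $k$, and throughput $q$ is their product:

$$v(k) = v_f\left(1 - \frac{k}{k_j}\right), \qquad q(k) = k \, v(k) = v_f \, k\left(1 - \frac{k}{k_j}\right)$$

Throughput is zero at both $k = 0$ (an empty road) and $k = k_j$ (jammed traffic), and peaks at $k = k_j/2$ with $q_{max} = v_f k_j/4$; the high-speed, lower-density side of this peak is the stable operating region described above.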

This section provides a framework for understanding and managing the problem of congestion in product development processes.

Cadence

This section explores the power of using a regular, predictable rhythm (cadence) within product development processes to transform unpredictable events into predictable ones, improve flow, and enable other beneficial practices.

  • Cadence Defined: Cadence is the use of a regular, predictable rhythm in a process, transforming unpredictable events into predictable ones.
  • The Principle of Periodic Resynchronization: Using a regular cadence limits the accumulation of variance in a sequential process by periodically resynchronizing events to the schedule.
  • Bus System Example: The bus system example illustrates how variability accumulates without periodic resynchronization, leading to bunching and unpredictable wait times (see the simulation sketch after this list).
  • Resynchronization Prevents Accumulation: Resynchronizing to a schedule prevents the variance of one event from being coupled to subsequent events, isolating the variance.
  • Resynchronization to Center of Range: Resynchronization restores conditions to the center of the control range, not just within bounds, making high-queue states less probable.
  • System-Level Benefit: Periodic resynchronization is a system-level benefit that may conflict with optimizing local performance.
  • The Cadence Capacity Margin Principle: Sufficient capacity margin is necessary to enable adherence to a regular cadence in the presence of variability.
  • Economic Trade-off: The cost of capacity margin for cadence is traded against the economic damage done by the catastrophic accumulation of variances.
  • The Cadence Reliability Principle: Cadence makes waiting times lower and more predictable, allowing for better planning and reduced risk.
  • Variability Substitution: Cadence often involves substituting variability in a cheaper variable (like content) for variability in an expensive variable (like schedule).
  • Cadence vs. Asynchronous Processes: A regular product introduction cadence allows for predictable planning, unlike asynchronous processes where future launch dates are uncertain.
  • The Cadence Batch Size Enabling Principle: Cadence facilitates smaller batch sizes by enforcing a regular rhythm for work product movement and reducing coordination overhead.
  • Transaction Cost Reduction: A regular cadence makes activities automatic and routine, lowering transaction costs and making it economical to use smaller batches.
  • The Principle of Cadenced Meetings: Scheduling frequent meetings using a predictable cadence reduces coordination overhead and provides better response time than asynchronous “on-demand” meetings.
  • On-Demand Meeting Latency: On-demand meetings have inherent latency due to the time required to synchronize schedules.
  • Structuring Meetings: Using a subcadence within cadenced meetings (time boxing for specific issues) further reduces time investment and improves attendance.
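
A tiny simulation makes the resynchronization effect visible (purely illustrative; the noise level and schedule margin are invented). Without a cadence, lateness couples from leg to leg and accumulates; resynchronizing to a padded schedule resets the variance at every stop:

```python
import random

random.seed(1)
LEG_MEAN, LEG_NOISE, MARGIN, LEGS = 10.0, 3.0, 2.0, 20

def leg_time():
    return LEG_MEAN + random.uniform(-LEG_NOISE, LEG_NOISE)

# Without cadence: each leg starts when the previous one ends,
# so variances couple and accumulate across the whole route.
t = 0.0
for _ in range(LEGS):
    t += leg_time()
free_running_drift = t - LEGS * LEG_MEAN

# With cadence: each leg is scheduled at mean + margin; an early arrival
# waits for the schedule, so lateness cannot accumulate across legs.
t, schedule = 0.0, 0.0
for _ in range(LEGS):
    schedule += LEG_MEAN + MARGIN
    t = max(schedule, t + leg_time())
cadenced_drift = t - LEGS * (LEG_MEAN + MARGIN)

print(f"free-running drift from plan: {free_running_drift:+.1f}")
print(f"cadenced drift from plan:     {cadenced_drift:+.1f}")
```

The cadenced version buys its predictability with the capacity margin (the 2-unit pad per leg), which is precisely the trade-off in the Cadence Capacity Margin Principle.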

This section establishes cadence as a powerful tool for improving flow predictability and enabling other beneficial practices.

Cadence in Action

This section provides numerous examples of how cadence can be applied in various aspects of product development processes, illustrating its practical value and versatility.

  • Product Introduction Cadence: Introducing products at regular intervals, often synchronized with trade shows or sales training.
  • Testing Cadence: Daily build-test cycles with fixed deadlines for code submission.
  • Prototyping Cycles: Building new prototypes on a regular schedule (e.g., once or twice a month).
  • New Product Portfolio Screening: Reviewing new product opportunities on a regular cadence.
  • Program Status: Reviewing program status at regular time intervals rather than based on milestone completion.
  • Resource Access: Using cadence to control access to shared resources (e.g., weekly test wafers, daily buyer presence in the team area).
  • Project Meetings: Holding project meetings on a regular weekly or daily cadence.
  • Design Reviews: Reviewing designs in small batches at a regular cadence (e.g., weekly drawing reviews).
  • Prototype Part Production: Allocating a specific time window on a production line for prototype parts on a regular schedule.
  • Supplier Visits: Periodically visiting suppliers on a regular cadence.
  • Coffee Breaks: Scheduled coffee breaks facilitating informal information exchange between teams (historical example).

These examples demonstrate that cadence is already present in some aspects of product development and can be extended to other areas.

Synchronization

This section explores the concept of synchronization, aligning multiple events in time, as another tool for improving flow. It highlights how synchronization can combine the benefits of large batches (scale economies) with the benefits of small batches (faster cycle time).

  • Synchronization Defined: Synchronization is causing multiple events to happen at the same time, either with or without a regular cadence.
  • Integration Points: Integration points in development processes synchronize the integration of multiple subsystems.
  • Value of Synchronization: Synchronization is valuable when there are economic advantages to processing multiple items simultaneously (e.g., scale economies, higher fidelity testing).
  • Synchronous vs. Asynchronous Circuits: Synchronous electronic circuits use a clock pulse to prevent timing errors from accumulating, while asynchronous circuits are faster but less stable.
  • The Synchronization Capacity Margin Principle: Sufficient capacity margin is required to enable synchronization, as the schedule is determined by the arrival of the most limiting item.
  • The Principle of Multiproject Synchronization: Synchronization can combine the scale economies of large batches with the cycle-time benefits of small batch sizes by synchronizing work from multiple projects.
  • Example of Multiproject Synchronization (Semiconductor Test Wafers): Combining design content from multiple projects on a shared test wafer achieves scale economies while providing faster feedback to individual projects.
  • Example of Multiproject Synchronization (Project Reviews): Reviewing multiple projects in the same session at a regular cadence increases meeting frequency and feedback speed while preserving scale economies.
  • The Principle of Cross-Functional Synchronization: Using synchronized events facilitates better cross-functional trade-offs and reduces processing time for complex tasks like engineering change review.
  • Example of Cross-Functional Synchronization (Engineering Change): Synchronizing engineering change reviews (having all reviewers meet at the same time) drastically reduces processing time and improves first-pass yield compared to an asynchronous process.
  • The Synchronization Queueing Principle: Synchronizing both the batch size and timing of adjacent processes can dramatically reduce queues without altering capacity utilization or decreasing batch size.
  • Eliminating Randomness: Synchronizing arrivals to capacity removes the randomness of arrivals in queueing equations, enabling queue reduction.
  • Example of Synchronization Queueing (Traffic Lights): Synchronized green lights create a moving wave of cars, reducing queues and improving flow.
  • The Harmonic Principle: Nesting processes with different cadences hierarchically as harmonic multiples of each other creates benefits similar to synchronizing batch sizes and timing.

This section positions synchronization as a valuable tool for coordinating activities and improving flow efficiency, particularly by combining the benefits of different batch sizes.

Sequencing Work

This section explores various strategies for sequencing work as it flows through product development processes, moving beyond the simple FIFO discipline and considering the economic implications of different prioritization approaches, particularly in the presence of uncertainty.

  • Sequencing Matters: How work is sequenced affects queue cost, particularly when jobs have different delay costs and task durations.
  • FIFO Limitations: FIFO (First-In, First-Out) is ideal for homogeneous work (same task duration, same cost of delay) but is rarely economically optimal in product development.
  • Economic Payoff of Sequencing: The payoff from thoughtful sequencing is highest when queue sizes are large.
  • Hospital Emergency Room Analogy: A hospital emergency room, with nonhomogeneous job durations and delay costs, provides a useful mental model for product development sequencing issues.
  • Triage: Triage (classifying incoming work by priority) is necessary when resources are insufficient to treat everyone quickly and there is enough information to make good sorting decisions.
  • The SJF Scheduling Principle: When delay costs are homogeneous, the preferred scheduling strategy is to do the Shortest Job First (SJF) to minimize total delay cost.
  • Convoy Effect: Processing the longest job first when delay costs are homogeneous creates a “convoy effect,” delaying many shorter jobs.
  • The HDCF Scheduling Principle: When job durations are homogeneous, it is best to do the High Cost of Delay First (HDCF) to minimize the economic impact of delays.
  • The WSJF Scheduling Principle: When both job durations and delay costs are not homogeneous, the best strategy is to use Weighted Shortest Job First (WSJF), prioritizing jobs based on their delay cost divided by their task duration.
  • WSJF Metric: Delay cost divided by task duration is a useful metric for setting priority in nonhomogeneous environments (see the sketch after this list).
  • Common Prioritization Mistakes: Prioritizing solely on ROI, using FIFO, or using Minimum Slack Time First (MSTF) are common but often economically suboptimal approaches.
  • MSTF Limitations: MSTF minimizes the percentage of projects that miss their planned schedule but is not optimal when projects have different delay costs.
  • Sequencing and Resource Blocking: Sequencing decisions involve trading off the cost-of-delay savings from immediate service against the cost of blocking the resource for the job’s duration.
  • The Local Priority Principle: Priorities should be local to individual resources, based on both delay cost (global) and the time required at that resource (local), rather than being global project priorities applied universally.
  • The Round-Robin Principle: When task duration is unknown, time-sharing capacity using Round-Robin (RR) scheduling ensures that short jobs are completed faster than long jobs, even without prior knowledge of their length.
  • Infinite Loops: RR scheduling prevents jobs with potentially infinite duration from locking up the system.
  • Quantum Size: The key decision in RR scheduling is the size of the time slice (quantum); a useful heuristic is to size the quantum so that roughly 80 percent of jobs complete within a single quantum.
  • Example of RR Scheduling (Purchasing Agent): Having a purchasing agent available in the team area for a predictable time each day implements RR scheduling.
  • The Preemption Principle: Preempting (interrupting a job already in service) provides the fastest cycle time but is usually inefficient due to switching costs.
  • Head-of-the-Line Privileges: Giving head-of-the-line privileges is usually sufficient, as it eliminates queue time, which is typically the largest component of cycle time.
  • When to Consider Preemption: Preemption should only be considered when switching costs are very low and there are substantial sequence-dependent cost-of-delay savings.
  • The Principle of Work Matching: Using sequencing to match jobs to appropriate specialized resources improves efficiency.
  • Visibility is Key: Effective work matching requires visibility into both incoming work characteristics and resource availability.
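
The scheduling comparison is easy to verify numerically. In this sketch (the job names and figures are invented), total delay cost is the sum over jobs of each job's cost-of-delay rate times its completion time; WSJF simply sorts by delay cost divided by duration:

```python
# Hypothetical jobs: (name, cost of delay per week, duration in weeks).
jobs = [("A", 10, 1), ("B", 10, 4), ("C", 40, 4), ("D", 40, 1)]

def total_delay_cost(sequence):
    """Sum over jobs of delay-cost rate times completion time."""
    t = cost = 0
    for _, cod, dur in sequence:
        t += dur
        cost += cod * t
    return cost

fifo = jobs
wsjf = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

print("FIFO:", [j[0] for j in fifo], "cost =", total_delay_cost(fifo))   # 820
print("WSJF:", [j[0] for j in wsjf], "cost =", total_delay_cost(wsjf))   # 400
```

With these invented numbers, WSJF cuts total delay cost by more than half relative to FIFO, because the short, high-cost-of-delay job D is served first.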

This section provides a detailed exploration of economically grounded approaches to work sequencing and prioritization.

Managing the Development Network

This section proposes a paradigm shift from viewing product development as a linear process to managing it as a robust network, drawing inspiration from telecommunications networks and emphasizing tailored routing, flexible routing, and the importance of preplanned flexibility.

  • Network-Based Approach: Managing product development like a data communications network can lead to high flow in the presence of variability.
  • Telecommunications Network Design: Telecommunications networks are designed to be robust in the presence of variability, unlike manufacturing processes that focus on reducing variability.
  • Linear vs. Network Processes: Today’s linear product development processes are rigid and vulnerable to variability, unlike network-based approaches.
  • The Principle of Tailored Routing: The network should tailor the route and sequence of subprocesses for each task or project based on its specific needs and economic value.
  • Value Stream Limitation: The linear “value stream” model is insufficient for information-based product development.
  • Economic Advantage of Tailored Routing: Tailored routing avoids forcing all projects through low-value-added activities.
  • Standardizing Modules, Not Processes: The focus should be on standardizing the modular building blocks of the development network, not the top-level process map.
  • Example of Tailored Routing (Consumer Product Company): Using a modular process allows projects to select only the necessary modules (nodes) that add value.
  • The Principle of Flexible Routing: Work should be routed along the currently most economic route, which may change dynamically with congestion and other factors (see the sketch after this list).
  • Internet Flexible Routing: The Internet uses routing tables that consider instantaneous congestion levels to select the “lowest-cost” path for each packet.
  • Alternate Routes: Preplanned alternate routes around likely points of congestion are necessary for flexible routing.
  • Maintaining Alternate Routes: Alternate routes should be kept open and ready to use, even if it means maintaining a trickle flow through them as an insurance premium.
  • Location of Alternate Routes: Alternate routes are most beneficial around scarce or expensive resources for high-variability tasks (e.g., testing departments).
  • The Principle of Flexible Resources: Flexible resources (part-time, T-shaped, cross-trained) enable flexible routing and absorb variability.
  • Robustness Through Interchangeability: Designing resources to be interchangeable (like on the Internet) contributes to network robustness.
  • The Principle of Late Binding: Allocating tasks to specific resources later in the process (late binding) leads to smoother flow by using noise-free information about current conditions at the resource.
  • Early vs. Late Binding: Early binding increases the accumulation of random variances because future conditions are uncertain.
  • Counteracting Variance: Late binding allows for neutralizing accumulated variance by using observed conditions to make loading decisions.
  • The Principle of Local Transparency: Making tasks and resources visible to adjacent processes (local transparency) improves work matching and decision-making.
  • Whiteboards for Local Transparency: Whiteboards with sticky notes can effectively visualize local WIP and facilitate coordination between adjacent processes.
  • Automated Systems vs. Manual Systems: Automated systems often struggle to replicate the simple elegance and flexibility of manual whiteboard systems.
  • The Principle of Preplanned Flexibility: Flexibility is a result of advance choices and planning, not just a frame of mind.
  • Examples of Preplanned Flexibility: Identifying nonessential requirements in advance and structuring the architecture to facilitate shedding them, or investing in keeping backup resources informed.
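
As a hedged sketch of flexible routing (the routes and figures are invented), each candidate route can be scored by its forecast flow time (current queue divided by processing rate, per Little's Formula), and work sent down whichever path currently scores cheapest:

```python
# Hypothetical routes for getting a design analyzed:
# (name, jobs currently in queue, processing rate in jobs/week).
routes = [
    ("in-house test lab", 12, 4.0),
    ("outside test service", 3, 1.5),
    ("analytical simulation", 1, 2.0),
]

def forecast_flow_time(queue_len, rate):
    """Little's Formula: expected flow time ~ WIP / processing rate."""
    return queue_len / rate

best = min(routes, key=lambda r: forecast_flow_time(r[1], r[2]))
print("route via:", best[0])   # picks the least-congested path right now
```

Because the scores are recomputed from current congestion, the chosen route changes as queues shift, which is the dynamic behavior the principle calls for.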

This section presents a vision for future product development processes based on the principles of robust network design.

Correcting Two Misconceptions

This section addresses two common but flawed beliefs: that centralizing resources is always detrimental to response time and that bottleneck delays are solely determined by the bottleneck’s characteristics.

  • The Principle of Resource Centralization: Correctly managed, centralized resources can reduce queues by exploiting variability pooling when demand is variable, infrequent, large, or has significant economies of scale.
  • Centralization and Variability Pooling: Centralizing resources pools variable demand, allowing the same amount of resources to achieve more throughput and faster flow time than decentralized resources (see the worked comparison after this list).
  • Military Example of Centralization: The Marine Corps centralizes heavy artillery and close air support to achieve concentration of force at the point of vulnerability.
  • Poor Management, Not Centralization: Poor response time from centralized resources is often due to how they are measured and managed (focus on efficiency vs. response time) rather than centralization itself.
  • The Principle of Flow Conditioning: Reducing variability in the process immediately upstream of a bottleneck (flow conditioning) can reduce the size and cost of the queue at the bottleneck.
  • Bottleneck Performance Factors: Queue time at a bottleneck is affected by the variability in both processing time and arrival rate (determined by the upstream process).
  • Flow Conditioning and Laminar Flow: In fluid mechanics, creating laminar flow upstream of a bottleneck reduces turbulence and increases throughput.
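
Variability pooling can be checked with textbook queueing formulas (a standard comparison, not taken from the book): two separate M/M/1 queues, each with its own server and demand stream, versus one pooled M/M/2 queue serving the combined demand with the same two servers.

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Average wait in queue for an M/M/c system (Erlang C formula)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilization (must be < 1)
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # P(arrival waits)
    return erlang_c / (c * mu - lam)                      # mean queue wait

lam, mu = 0.9, 1.0   # each decentralized queue runs at 90% utilization
separate = mmc_wait(lam, mu, 1)       # one server with its own demand
pooled = mmc_wait(2 * lam, mu, 2)     # same two servers, demand pooled
print(f"separate M/M/1 wait: {separate:.1f} service times")   # 9.0
print(f"pooled   M/M/2 wait: {pooled:.1f} service times")     # ~4.3
```

At identical 90 percent utilization, pooling the two servers cuts queue time by more than half with no added resources; this is the variability-pooling effect behind correctly managed centralization.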

This section clarifies two common misunderstandings and offers a more nuanced perspective on resource centralization and bottleneck management.

Using Fast Feedback

This chapter explores the critical importance of fast feedback loops and control systems in product development, drawing on insights from economics and control engineering to explain their benefits, design principles, and human impact.

The Economic View of Control

This section establishes an economic perspective on control systems, arguing that their purpose is to efficiently influence overall economic outcomes by focusing on parameters with high economic influence and efficient control.

  • Purpose of Control: The purpose of controlling product development is to efficiently influence overall economic outcomes.
  • Efficient Influence: Effective control systems focus on proxy variables that have a strong transfer function to overall economic results and can be influenced efficiently.
  • The Principle of Maximum Economic Influence: Focus control efforts on project and process parameters that have the highest economic influence on profitability.
  • The Principle of Efficient Control: Select control variables that are both economically influential and can be controlled efficiently.
  • The Principle of Leading Indicators: Select control variables that predict future system behavior, enabling early interventions that are often more economically efficient.
  • Example of Leading Indicators (Task Start Times): Focusing on task start times as a leading indicator of completion times allows for early intervention when problems are easier and cheaper to address.
  • The Principle of Balanced Set Points: Establish control set points (tripwires) based on equal economic impact, ensuring that interventions are triggered for deviations of comparable economic significance across different parameters and projects.
  • Mistakes in Setting Set Points: Companies often make mistakes by focusing on controlling proxy variables, ranking them in priority order (instead of using transfer functions), and basing set points on absolute changes instead of economic impact.
  • The Moving Target Principle: Recognize when to pursue a dynamic goal (where the economically optimal state changes over time) rather than striving for conformance to a static plan.
  • Static vs. Dynamic Goals: Control systems for static goals focus on preventing deviations and closing gaps. Control systems for dynamic goals focus on quickly correcting deviations from a constantly changing optimum.
  • Military Fire Control Analogy: The military uses different missile types for static (building) and dynamic (maneuvering jet) targets, illustrating the need for different control systems.
  • The Exploitation Principle: Control systems must enable the exploitation of unplanned economic opportunities, not just the reduction of negative deviations.
  • Conformance vs. Exploitation: Blindly adhering to the original plan can lead to missing low-cost opportunities to differentiate products or improve performance.

This section provides an economic foundation for designing effective control systems in product development.

The Benefits of Fast Feedback

This section delves into the more subtle but significant benefits of fast feedback loops, beyond their role in adaptation, highlighting their ability to reduce queues and accelerate learning.

  • The Queue Reduction Principle of Feedback: Fast feedback enables a process to operate with smaller queues. The speed of the feedback loop sets a lower limit on the WIP needed to compensate for variability.
  • Internet Flow Control and RTT: On the Internet, the speed of end-to-end feedback (round-trip time, or RTT) determines the minimum buffer size needed to compensate for variability before the feedback signal takes effect (see the worked example after this list).
  • Faster Feedback, Less WIP: Accelerating feedback loops allows for designing processes with less WIP, which in turn reduces delay times, creating a regenerative cycle.
  • The Fast-Learning Principle: Fast feedback accelerates learning and increases the efficiency of information generation by compressing the time between cause and effect, reducing noise from extraneous signals.
  • Team New Zealand Example: Team New Zealand’s use of two boats to test design improvements simultaneously enabled faster and more efficient learning compared to the American team’s approach.
  • Investment in Superior Environment: Creating a superior development environment (like a second boat) may be necessary to extract the smaller signals that come with fast feedback.
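
The networking rule of thumb behind this is the bandwidth-delay product (standard networking arithmetic, not a figure from the book): the data that must be in flight to keep a link busy is

$$W_{\min} = \text{bandwidth} \times RTT$$

so a 10 Mb/s path with a 100 ms round-trip time needs roughly 1 Mb (about 125 KB) in flight. Halve the feedback delay and you halve the WIP the process must carry, which is the regenerative cycle described above.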

This section emphasizes the critical role of fast feedback in improving both process efficiency and learning effectiveness.

Control System Design

This section focuses on the technical aspects of designing control systems, distinguishing between metrics and control systems, exploring the role of agility, batch size, noise, decision rules, and the locality of feedback.

  • Metrics vs. Control Systems: A metric is only the measurement portion of a control system; focusing solely on metrics ignores dynamic response and stability issues.
  • The Principle of Useless Measurement: Measuring something does not guarantee it will be done; a metric is only one part of a control system.
  • The First Agility Principle: To operate safely at higher speeds in an uncertain environment, we need less massive programs (a short turning radius) and sufficient reserve resources to redirect them quickly (fast reaction time).
  • Agility Reduces Control Problem: Increased agility reduces the magnitude of the control problem by allowing for faster course corrections.
  • The Batch Size Principle of Feedback: Small batches yield fast feedback. At any given processing rate, smaller batches reach downstream processes faster.
  • Feedback Effects are Crucial: The feedback effects of batch size reduction are orders of magnitude more important in product development than in manufacturing.
  • Truncating Unsuccessful Paths: Fast feedback truncates unsuccessful paths quickly, reducing the cost of failure associated with risk-taking and improving payoff asymmetry.
  • The Signal to Noise Principle: To detect smaller signals generated by small steps (batches), the noise in the system must be reduced.
  • Reducing Noise: Systematically reducing external sources of noise (like using a test course with consistent winds) improves the ability to detect small performance changes.
  • The Second Decision Rule Principle: Control systems can be made faster and more efficient by controlling the economic logic behind a decision (using decision rules) rather than participating in every individual decision (see the sketch after this list).
  • Control without Delay: Controlling the logic of decisions allows for control without delaying individual decisions.
  • Developing Decision-Making Capacity: Communicating the economic logic behind decisions helps develop the organization’s capacity to make sound economic choices at lower levels.
  • The Locality Principle of Feedback: Local feedback loops are inherently faster than global feedback loops. Control signals propagate more quickly in smaller systems.
  • Kanban vs. TOC (Feedback): The Kanban system’s local WIP constraints generate faster control signals than the global constraints of the TOC system, limiting the perturbation in flow to adjacent work centers.
  • Instability from Long Feedback Loops: Long feedback loops can lead to instability, as seen in the difficulty of steering a large ship due to the time lag between rudder application and course change observation.
  • Fast Local Feedback Prevents Accumulation: Fast local feedback loops prevent the accumulation of variance and improve stability.
  • The Relief Valve Principle: Having a clear, predetermined “relief valve” (an inexpensive measure of performance that can absorb variability) is a key part of control system design.
  • Example of Relief Valve (Feature Set): Using the feature set as a relief valve for schedule variation involves predetermining which features can be dropped when schedules deviate.
  • Returning to Center: When using a relief valve, it’s important to make a significant adjustment to return the system to the center of its control range, not just within bounds, to avoid repeated interventions.
  • The Principle of Multiple Control Loops: Embedding fast time-constant feedback loops inside slow ones allows for managing variation at different time scales.
  • Example of Multiple Control Loops (Sailing): Short-time-constant adjustments (easing helm, slacking sails) absorb small gusts, while longer-time-constant actions (reefing) respond to sustained wind increases.
  • Filtering Variation: Fast loops filter out short-term variation, enabling slow loops to respond to overall trends.
  • The Principle of Controlled Excursions: Effective control depends on preventing excursions into regions of instability, which can lead to regenerative destabilizing feedback.
  • Example of Controlled Excursions (Testing Queues): Allowing testing queues to grow beyond a certain point can lead to regenerative instability as engineers request extra tests, further increasing the queue.
  • WIP Constraints Prevent Excursions: WIP constraints help keep queues within a controlled range and prevent excursions into unstable regions.
  • The Feedforward Principle: Providing advance notice of heavy arrival rates to downstream processes (feedforward) minimizes queues by reducing the randomness of the arrival rate.
  • Counteracting Variability: Feedforward information allows for counteracting variability with negative covariances.
  • Example of Feedforward (Fast-Food Restaurant): Fast-food restaurants anticipate demand and build extra inventory before lunch hour to avoid queues.
  • Feeding Forward High Impact Information: Feedforward should prioritize information with the greatest economic impact, such as the arrival rate of work in processes prone to congestion or information that significantly reduces uncertainty.
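
A decision rule can be as small as a guard function that encodes the economic logic once, so individual decisions need no escalation. This is a hedged sketch in the spirit of the book's cost-of-delay framework; the rule and figures are invented:

```python
COST_OF_DELAY_PER_WEEK = 50_000   # set once by management for this project

def approve_expedite(expense: float, weeks_saved: float) -> bool:
    """Decision rule: spend money to save time whenever the schedule
    savings are worth more than the expense. Anyone on the team can
    apply this without waiting for a management decision."""
    return weeks_saved * COST_OF_DELAY_PER_WEEK > expense

# An engineer can immediately approve a $20,000 expediting fee that saves
# one week, because management controls the rule, not each decision.
print(approve_expedite(20_000, 1.0))   # True
```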

This section provides a comprehensive overview of control system design principles and their application in product development.

The Human Side of Feedback

This section explores how the human element interacts with feedback loops and control systems, highlighting the psychological impact of fast and slow feedback and the importance of colocation, empowerment, and trust.

  • Human Element in Control Loop: Human behavior affects the overall behavior of the system when humans are part of the control loop.
  • The Principle of Colocation: Colocation (physical proximity) is a powerful way to create fast feedback in human systems by increasing face-to-face communication.
  • Colocation Benefits: Colocation enables real-time, high-bandwidth communication, reduces transaction costs for communication, and increases non-task-related interpersonal communication, which builds team cohesion and reduces negative biases.
  • The Empowerment Principle of Feedback: Fast feedback gives people a sense of control by allowing them to perceive the cause and effect relationships in their actions, leading to increased initiative and taking control of the system.
  • Delayed Feedback and Victimhood: Slow feedback weakens the perceived relationship between actions and consequences, making people feel like victims of a monolithic system and hindering initiative.
  • The Hurry-Up-and-Wait Principle: Large queues and slow feedback make it hard to create urgency, as the time lag between completing work and the next activity starting is too long.
  • Expediting and Lack of Urgency: When expediting is common, there is less incentive to complete jobs early, as priority is based on imminent deadlines.
  • Small Queues and Urgency: Small queues and fast feedback reinforce the relationship between early completion and early service, making it easier to promote an atmosphere of urgency.
  • The Amplification Principle: The human element tends to amplify large excursions from the desired state. Small deviations can motivate conformance, while large deviations can lead to disengagement and destabilization.
  • Example of Amplification (Project Delays): Engineers may allocate less time to severely delayed projects and focus on less delayed ones to avoid being blamed for overall project failure, amplifying the delay on the problem project.
  • Preventing Large Excursions: Keeping systems operating within a controlled range prevents excursions into regions where human behavior can lead to regenerative destabilization.
  • The Principle of Overlapping Measurement: To align behaviors and encourage collaboration, reward people for the work and results of others, even if those results are outside their primary control.
  • Example of Overlapping Measurement (Professional Service Firm): Compensating partners based on personal, local office, and firm-wide results creates a supportive structure.
  • Balancing Overlap: The amount of overlap in measurement needs to be carefully balanced to avoid losing focus on individual responsibilities.
  • The Attention Principle: Time is a scarce and valuable resource. Allocating personal time to a program or project is the most effective way to communicate its importance to the organization.

This section highlights the crucial role of human behavior in the effectiveness of feedback loops and control systems.

Metrics for Flow-Based Development

This section translates the principles of Flow-Based Product Development into actionable metrics that organizations can use to monitor and improve their processes, focusing on queues, batch size, cadence, and flexibility rather than traditional metrics like efficiency and conformance.

  • Metrics Aligned with Causality: Metrics should be based on the causal relationships that drive economic success, as identified in the principles of this book.
  • Challenging Traditional Beliefs: The metrics recommended challenge traditional beliefs that emphasize efficiency and conformance to static plans.
  • Focus on Queues: Since in-process inventory (queues) is centrally important to economic performance, measuring queues and the factors that cause them is essential.
  • Flow Metrics:
    • DIP Turns: Ratio of revenue to DIP (analogous to inventory turns), measuring the overall efficiency of flow.
    • Average Flow Time: Calculated from DIP turns, analogous to days receivable outstanding.
    • Differential Service Metrics: DIP turns and flow times for individual workstreams when differentiating quality of service.
    • Process-Specific Metrics: WIP turns and flow times for individual process stages to highlight congestion.
  • Inventory and Queue Metrics:
    • Total DIP: Measuring total in-process inventory, including work in queue, is a useful proxy for queue size.
    • Queue Size: Measuring the number of items in queue is a simple and effective starting point.
    • Estimating Work in Queue: Classifying jobs into simple categories to estimate the amount of work in the queue.
    • Process-Specific Queues: Measuring queues associated with particular processes.
    • Forecast Flow Time: Calculating forecast flow time from WIP and the departure rate using Little's Formula (see the sketch after this list).
    • Aging Analysis: Monitoring the distribution of time-in-queue to identify outliers.
    • Financial Value of Queues: Quantifying the financial cost of queues by multiplying queue time by cost of delay.
  • Batch Size Metrics: Measuring the batch sizes used in processes, trends in batch size, and transaction costs.
  • Cadence Metrics: Monitoring the number of processes using cadence and trends in conformance to cadence.
  • Capacity Utilization Metrics: Capacity utilization is a better metric for long-term planning than for day-to-day control due to estimation difficulty.
  • Feedback Speed Metrics: Measuring average decision cycle time, outliers in decision cycle time, and aging analysis on unresolved problems.
  • Flexibility Metrics: Measuring the breadth of skill sets (T-shaped people), the number of multipurpose resources, and the number of congested processes with alternate routes.
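
Several of these metrics can be computed from nothing more than job timestamps. A minimal sketch (the field names, rates, and thresholds are hypothetical):

```python
from datetime import datetime, timedelta

now = datetime(2024, 6, 1)
# Hypothetical queue: (job, time it entered the queue).
queue = [("J1", now - timedelta(days=40)),
         ("J2", now - timedelta(days=3)),
         ("J3", now - timedelta(days=9))]

departure_rate = 0.5   # jobs per day, measured from recent history

# Forecast flow time via Little's Formula: WIP / departure rate.
print("forecast flow time:", len(queue) / departure_rate, "days")   # 6.0

# Aging analysis: flag outliers that have waited disproportionately long.
LIMIT = timedelta(days=30)
outliers = [job for job, entered in queue if now - entered > LIMIT]
print("escalate:", outliers)   # ['J1']
```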

This section provides a practical set of metrics to support the implementation and management of Flow-Based Product Development principles.

Achieving Decentralized Control

This chapter delves into the concept of decentralized control, drawing heavily on lessons from military doctrine, particularly the U.S. Marine Corps, to illustrate how to balance centralized and decentralized approaches, maintain alignment in the presence of uncertainty, and leverage the human element.

How Warfare Works

This section provides a brief overview of military concepts, particularly the balance between offense and defense, the role of mass and economy of force, and the distinction between attrition warfare and maneuver warfare, setting the stage for applying these ideas to product development.

  • Offense vs. Defense: Military strategy involves balancing offensive actions (centralized control for massing force) and defensive reactions (decentralized control for rapid response).
  • Mass and Economy of Force: Principles of war emphasizing concentrating force at the point of attack and allocating minimum resources elsewhere.
  • Attrition Warfare vs. Maneuver Warfare: Attrition warfare (positional) emphasizes grinding down the enemy. Maneuver warfare (dynamic) emphasizes surprise, movement, and exploiting uncertainty.
  • U.S. Marine Corps and Maneuver Warfare: The Marine Corps exemplifies modern maneuver warfare, focusing on fighting outnumbered with limited initial support and adapting to the fluid battlefield.
  • Planning in Product Development vs. Military: Unlike product developers who often strive for conformance to a static plan, the modern military views plans as a baseline for adaptation to unforeseen circumstances.

This introduction provides a framework for understanding the military context from which valuable lessons for product development can be drawn.

Balancing Centralization and Decentralization

This section explores the need for a balanced approach to control, combining the advantages of both centralization and decentralization, and offers principles for determining when each approach is most appropriate.

  • Balanced Approach: The goal is not 100% decentralization but decentralized execution supported by centralized coordination.
  • The Second Perishability Principle: Decentralize control for problems and opportunities that age poorly (perishable), requiring rapid response. Pre-position resources and authority at the lowest levels to deal with such issues quickly.
  • Example of Perishability (Fire): Fires require rapid, decentralized response when small to prevent them from becoming large.
  • Perishable Opportunities: Missing fleeting opportunities can be costly.
  • The Scale Principle: Centralize control for problems that are infrequent, large, or have significant economies of scale, where massed response is necessary.
  • Example of Centralization (Fire Trucks): Large fires require centralized resources (fire trucks) that can be moved to the point of need, though this increases response time.
  • The Principle of Layered Control: Adapt the control approach to emerging information about the problem using a layered system.
  • Triage: Use triage for quick sorting of problems based on initial severity when resources are limited (as in mass-casualty situations).
  • Escalation: For problems where initial severity is unknown, use an escalation process (as in computer operating systems) to automatically raise the priority of jobs that wait too long in a lower-priority queue.
  • Combining Triage and Escalation: A combined approach uses triage for initial sorting and escalation for jobs that cannot be immediately classified or resolved; a sketch of this pattern follows the list.
  • The Opportunistic Principle: Adjust the plan for unplanned obstacles and opportunities. The original plan is based on imperfect data and the enemy (market, technology) is actively changing.
  • Bypassing Obstacles: It is often better to bypass unexpected obstacles than to overpower them, particularly if they make the original plan economically unsound.
  • Exploiting Opportunities: Control systems must enable the exploitation of unexpected opportunities that emerge during development.
  • The Principle of Virtual Centralization: Create the ability to quickly reorganize decentralized resources to create centralized power when infrequent, large demands occur.
  • Example of Virtual Centralization (Naval Firefighting): Naval ships train all sailors to fight fires and organize them into damage control parties during a crisis, creating a centralized force from decentralized resources.
  • Preplanned System: Virtual centralization requires a preplanned system, training, and pre-positioned equipment.
  • Civilian Firefighting Example: Mutual support between fire departments in adjacent communities provides virtual centralized capacity.
  • Tiger Teams: Assembling experienced individuals into “tiger teams” to address severe program crises is a product development example of virtual centralization.
  • The Inefficiency Principle: The inefficiency of decentralization can be less costly than the value of faster response time, justifying paying the price for responsiveness when it is highly valuable.
  • CPR Example: Training individuals in CPR is a costly investment in decentralized resources justified by the value of rapid response in a time-critical situation.
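
The triage-and-escalation pattern described above maps naturally onto a two-level priority queue with a periodic escalation pass. The Python sketch below shows one illustrative way to implement it; the class, threshold, and job names are assumptions, not the book’s design.

```python
import heapq, itertools, time

HIGH, LOW = 0, 1
ESCALATE_AFTER = 7 * 24 * 3600   # promote jobs waiting more than a week

class EscalatingQueue:
    def __init__(self):
        self._heap = []                # entries: (priority, enqueue_time, seq, job)
        self._seq = itertools.count()  # tie-breaker so jobs are never compared

    def submit(self, job, severe: bool):
        """Triage: severe problems start at high priority, the rest at low."""
        prio = HIGH if severe else LOW
        heapq.heappush(self._heap, (prio, time.time(), next(self._seq), job))

    def escalate(self, now=None):
        """Escalation: promote low-priority jobs that have waited too long."""
        now = now if now is not None else time.time()
        overdue = lambda p, t: p == LOW and now - t > ESCALATE_AFTER
        self._heap = [(HIGH if overdue(p, t) else p, t, s, j)
                      for p, t, s, j in self._heap]
        heapq.heapify(self._heap)

    def next_job(self):
        """Serve the highest-priority, longest-waiting job first."""
        return heapq.heappop(self._heap)[3] if self._heap else None

q = EscalatingQueue()
q.submit("cosmetic defect report", severe=False)
q.submit("line-down test failure", severe=True)
q.escalate()                 # would also promote anything waiting over a week
print(q.next_job())          # -> "line-down test failure"
```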

This section provides principles for strategically balancing centralized and decentralized control based on the nature of problems and opportunities.

Military Lessons on Maintaining Alignment

This section explores how sophisticated military organizations, particularly the U.S. Marine Corps, maintain alignment and coordination despite emphasizing decentralized control and adapting to uncertainty, drawing on principles like mission orders, clear boundaries, main effort, agility, peer-level coordination, flexible plans, tactical reserves, and early contact.

  • Alignment in Uncertainty: Maintaining alignment is challenging with decentralized control but is crucial for effectiveness, as seen in the strength of the Roman army’s coordination.
  • The Principle of Alignment: More value is created by overall alignment than by local excellence. Focusing resources on a few key attributes rather than achieving modest advantages in all areas creates disproportionate effects.
  • The Principle of Mission: Using a clear mission, particularly the “why” (Commander’s Intent), as a tool for maintaining alignment in uncertain environments. Mission orders focus on the desired end state and purpose rather than prescribing how the mission is accomplished.
  • Commander’s Intent: Understanding the commander’s intent enables decentralized units to select the right course of action when facing unexpected obstacles or opportunities.
  • The Principle of Boundaries: Establishing clear roles and boundaries for individual units is essential for coordination and preventing “blue-on-blue” attacks (friendly fire) in a fluid battlefield.
  • Product Development Roles: Clear role definition in product development teams increases efficiency and reduces the need for excessive communication and meetings.
  • Avoiding White Space: Organizations need to be alert for gaps (“white space”) between roles that nobody feels responsible for.
  • The Main Effort Principle: Designating a “main effort” (the point of maximum focus) aligns resources and subordinates other activities to achieve a concentrated impact, whether at a point of weakness or the enemy’s center of gravity.
  • Main Effort in Product Development: Identifying a small set of preference-shifting features as the main effort aligns the team’s focus and subordinates other features.
  • The Principle of Dynamic Alignment: The main effort may shift quickly in the course of battle (or product development) as conditions change and new opportunities or weaknesses emerge.
  • Adapting to Change: The ability to dynamically reallocate resources and shift focus based on emerging information is crucial for achieving alignment in a dynamic environment.
  • The Second Agility Principle: Developing the ability to quickly shift focus (agility) is critical in maneuver warfare and product development. Agility comes from small, less massive units and sufficient resources to redirect them quickly.
  • OODA Loop: Colonel John Boyd’s OODA (Observe, Orient, Decide, Act) loop concept highlights the importance of time-competitive decision cycles and rapid transitions.
  • Practice and Training: The ability to make quick transitions comes from constant practice and training.
  • Architecture and Change: Product architecture can enable rapid changes by partitioning the design to absorb change gracefully (e.g., isolating uncertainty).
  • The Principle of Peer-Level Coordination: Maintaining alignment with decentralized control is achieved through explicit and implicit lateral communications between peers, not solely through hierarchical command.
  • Face-to-Face and Voice Communication: Explicit lateral communication relies on face-to-face and voice communication due to their higher bandwidth and real-time nature.
  • Doctrine and Training: Implicit lateral communication comes from shared doctrine and extensive training, enabling units to predict how other units will act.
  • Maneuver Warfare vs. Traditional Hierarchy: The maneuver warfare approach emphasizes lateral communication more than traditional hierarchical military structures.
  • Colocated Teams and Lateral Communication: Colocated teams facilitate continuous peer-to-peer communication, which is more effective in responding to uncertainty than centralized project management.
  • The Principle of Flexible Plans: Using simple, modular, and flexible plans (with preplanned branches and sequels) allows for maintaining alignment when conditions change unexpectedly.
  • Planning for Adaptation: Planning should focus on creating a flexible framework for adaptation rather than a rigid blueprint for conformance.
  • The Principle of Tactical Reserves: Decentralizing a portion of reserves to different organizational levels provides enough capacity margin at each level to deal instantly with a range of contingencies, enabling quick realignment.
  • Layered Reserves: Layering reserves combines the speed of local reserves with the efficiency of centralized reserves; a sketch of this pattern follows the list.
  • Product Development Equivalent: Tactical reserves in product development are capacity margin pre-positioned at various organizational levels, enabling support groups to absorb local variation.
  • The Principle of Early Contact: Making early and meaningful contact with the problem (enemy forces, market risk, technical risk) reduces uncertainty before it can accumulate, enabling better planning and decision-making.
  • Proof-of-Concept and Market Feedback: Quick proof-of-concept and early market feedback are essential for resolving risks rapidly.
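
Layered reserves lend themselves to a simple recursive structure: each level absorbs what it can from its own margin and escalates only the shortfall. The Python sketch below is an illustrative assumption about how such layered capacity might be modeled; the tier names and hour figures are invented for the example.

```python
class ReservePool:
    """A capacity reserve at one organizational level, chained to its parent."""
    def __init__(self, name: str, hours: float, parent=None):
        self.name, self.hours, self.parent = name, hours, parent

    def request(self, hours_needed: float) -> str:
        drawn = min(self.hours, hours_needed)   # absorb locally first
        self.hours -= drawn
        shortfall = hours_needed - drawn
        if shortfall == 0:
            return f"covered locally by {self.name}"
        if self.parent is None:
            return f"shortfall of {shortfall:.0f}h even after {self.name}"
        return f"{self.name} exhausted; " + self.parent.request(shortfall)

program = ReservePool("program reserve", hours=400)
team = ReservePool("team reserve", hours=40, parent=program)
print(team.request(25))   # small contingency handled instantly at team level
print(team.request(100))  # draws the remaining 15h locally, escalates 85h up
```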

This section provides a rich set of principles and examples from military doctrine on how to achieve effective decentralized control and maintain alignment in dynamic and uncertain environments.

The Technology of Decentralization

This section discusses four technical factors that support and enable decentralized control, focusing on the need for decentralized information, accelerating decision-making speed, measuring response time, and leveraging internal and external markets.

  • Technical Factors for Decentralization: Decentralized control is supported by technical capabilities related to information dissemination, decision speed, measurement, and market mechanisms.
  • The Principle of Decentralized Information: Decentralizing control requires decentralizing both the authority to make decisions and the key information needed to make those decisions correctly.
  • Commander’s Intent and Information: Understanding the intentions of commanders two levels higher requires widely disseminating this information.
  • Boeing 777 Decision Rule Example: Disseminating the system-level economic decision rule for weight vs. cost allowed individual engineers to make correct local decisions (see the sketch after this list).
  • The Frequency Response Principle: The ability of a control system (or organization) to respond to rapidly changing signals (transients) is limited by its frequency response. Accelerating decision-making speed increases frequency response.
  • Accelerating Decision-Making: Accelerating decision-making involves reducing the number of people and management layers involved and enabling lower organizational levels to make decisions through authority, information, and practice.
  • The Quality of Service Principle: When response time is important, it should be measured and used as a key metric for support groups, aligning incentives with the desired outcome.
  • Example of Quality of Service (Tooling Shop): Changing a tooling shop manager’s metric from efficiency to how long tooling was on the critical path improved response time.
  • Quality of Service Agreements: Establishing quality of service agreements explicitly states the expected response time from support groups.
  • The Second Market Principle: Using internal and external markets can decentralize control more effectively than centralized allocation, particularly when different service levels are priced differently.
  • Pricing and Demand Control: Pricing manages demand for scarce resources, allowing users to make self-serving decisions aligned with overall economics (e.g., paying premiums for express delivery).
  • Pricing for Internal Resources: Applying differential pricing to internal support groups can decentralize control and reduce decision delays compared to centralized allocation.
  • Example of Pricing (Coffee Cups): Using limited tokens (project-branded coffee cups) to buy priority in queues is an example of using a market mechanism for internal resource allocation.
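
The Boeing 777 example referenced above boils down to publishing a single exchange rate that lets any engineer settle a weight-versus-cost trade-off locally. The Python sketch below illustrates the idea; the $300-per-pound threshold and the sample numbers are assumptions for the example, not figures quoted in this summary.

```python
# Hypothetical system-level decision rule: dollars of added unit cost
# justified per pound of weight saved.
EXCHANGE_RATE = 300.0

def approve_locally(added_cost: float, pounds_saved: float) -> bool:
    """Any engineer may approve a change whose cost per pound saved beats
    the published exchange rate; no escalation up the hierarchy is needed."""
    if pounds_saved <= 0:
        return False
    return added_cost / pounds_saved <= EXCHANGE_RATE

# A $2,000 bracket redesign saving 10 lb costs $200/lb -- approve locally.
print(approve_locally(added_cost=2000, pounds_saved=10))   # True
# A $5,000 change saving 10 lb costs $500/lb -- reject or escalate.
print(approve_locally(added_cost=5000, pounds_saved=10))   # False
```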

This section highlights the technical underpinnings that enable effective decentralized control in product development.

The Human Side of Decentralization

This final section explores the human dimensions of decentralized control, emphasizing the importance of cultivating initiative, leveraging face-to-face communication, and building trust within the organization.

  • Human Dimensions: Decentralized control relies on human factors like initiative, communication, and trust.
  • The Principle of Regenerative Initiative: Cultivating initiative is crucial for effective decentralized control. Encouraging initiative provides positive reinforcement and makes people more willing to take action.
  • Inaction vs. Bad Decision: In a dynamic environment, inaction and lack of decisiveness are often more dangerous than making an imperfect decision quickly.
  • Stifling Initiative: Leaders should avoid stifling initiative, as it is a critical quality for decentralized control.
  • The Principle of Face-to-Face Communication: Leveraging the speed and bandwidth of face-to-face communication is essential for maintaining alignment with decentralized control.
  • Colocation Benefits: Colocation increases face-to-face communication, accelerating feedback and building team cohesion.
  • Partial Colocation: Various forms of partial colocation (colocating part of the team or part of the time) can provide benefits.
  • Verbal vs. Textual Communication: Verbal communication has higher bandwidth and generates more rapid feedback than textual alternatives like email.
  • The Trust Principle: Decentralized control is based on trust, both hierarchical and lateral. Trust is built through experience and predictability of behavior.
  • Predictability of Behavior: Effective collaboration is possible when people can predict each other’s behavior, even if they have different attitudes and values.
  • Maintaining Team Continuity: Maintaining continuity in organizational units allows members to train and work together, building trust through shared experience.
  • Small Batches Build Trust: Moving to small batch size activities increases the number of learning cycles and opportunities to predict the behavior of other organizational members, building trust.

This section emphasizes the critical human factors that enable successful implementation of decentralized control.

Conclusion

The conclusion synthesizes the key arguments of the book, reiterating the importance of challenging traditional orthodoxies, embracing an economic view, and adopting the principles of Flow-Based Product Development to navigate uncertainty and achieve superior performance.

  • Challenging Orthodoxy: The book challenges the prevalent, but flawed, traditional approaches to product development that hinder flow and economic performance.
  • Economic View is Essential: Viewing product development through an economic lens is fundamental to making sound decisions and identifying true opportunities for improvement.
  • Importance of Queues: Queues are a major, often invisible, source of waste that must be actively managed.
  • Variability as an Opportunity: Variability is not inherently bad and can be exploited to create economic value, particularly in the presence of asymmetric payoffs.
  • Batch Size Reduction: Reducing batch size is a powerful lever for improving flow, reducing risk, and accelerating feedback.
  • WIP Constraints for Control: WIP constraints are effective for controlling queues, improving predictability, and forcing rate-matching.
  • Cadence and Synchronization: Cadence and synchronization are valuable tools for managing flow under uncertainty and enabling other beneficial practices.
  • Fast Feedback and Learning: Fast feedback loops are crucial for adaptation, reducing queues, and accelerating learning.
  • Decentralized Control: Decentralized control, balanced with centralized coordination, is essential for adapting to uncertainty and seizing perishable opportunities.
  • Key Takeaways: The core lessons involve understanding and quantifying economics, managing queues, embracing variability, reducing batch size, using WIP constraints, implementing cadence and synchronization, accelerating feedback, and achieving decentralized control.
  • Actionable Next Steps: Readers are encouraged to begin implementing these principles in small batches, measure their impact, and continuously learn and adapt their approach.