
Unlocking business value in the age of software: a summary of ‘Project to Product’
Quick orientation
“Project to Product,” by Dr. Mik Kersten, addresses a critical challenge for modern enterprises: how to survive and thrive in the age of software. Kersten argues that traditional project-oriented management, a relic of past industrial revolutions, is failing organizations in today’s fast-paced, software-driven world. The book introduces the flow framework, a new management paradigm designed to help businesses shift from managing IT as a cost center to nurturing software delivery as a value-generating product engine.
This summary will guide you through Kersten’s core arguments and the practical components of the flow framework. You’ll gain simple, clear explanations of every key idea, from understanding technological revolutions to implementing value stream networks, enabling you to see how these concepts can transform your organization’s approach to software delivery.
Introduction: the turning point
The introduction establishes the critical context: we are at a “turning point” in the age of software, a period where businesses must adapt to new means of production—software—or risk obsolescence. Traditional management frameworks are inadequate for this new reality, leading to failed digital transformations despite significant investment.
The current crisis
The book begins by highlighting the inadequacy of old management models in the face of digital disruption.
- Technological revolutions: History shows a pattern of technological revolutions (e.g., industrial revolution, age of steel, age of mass production) occurring roughly every 50 years, each with an “installation period” (new tech explosion) and a “deployment period” (production capital takes over), separated by a “turning point.”
- Age of software: We are currently in the age of software, passing through its turning point. This means companies must master software delivery or face decline, similar to how companies in previous eras had to master steam power or assembly lines.
- Failed transformations: Many large-scale agile and devops transformations are failing because they focus on activities (e.g., number of people trained) rather than business outcomes. They often try to apply management principles from previous revolutions to software, which is fundamentally different.
- The problem: The core issue is a disconnect between business leadership (who often manage IT as projects and cost centers) and technology delivery, leading to inefficiencies and an inability to compete with software-native companies.
- The author’s motivation: Kersten shares his frustration witnessing massive investments in transformations (like at Nokia and a large bank) go to waste due to flawed approaches, motivating him to develop a new framework.
- The flow framework’s promise: This framework aims to bridge the gap between business and technology by focusing on the flow of business value through product-oriented value streams, measured by “flow metrics.”
- Value stream networks: These are presented as the new infrastructure for innovation, allowing real-time measurement of software delivery investments and their connection to business outcomes.
- Urgency: Tech giants are mastering traditional businesses faster than traditional businesses are mastering software, making this shift critical for survival and a more balanced economic future.
The introduction powerfully argues that a fundamental shift in management thinking is necessary to navigate the current technological revolution successfully.
Part I: the flow framework
Chapter 1: the age of software
This chapter explores the pervasive impact of digital disruption across all economic sectors and explains why current business approaches are insufficient. It revisits Carlota Perez’s model of technological revolutions to frame the current “age of software” and introduces the author’s “three epiphanies” that led to the flow framework.
Understanding digital disruption
No industry is safe from digital disruption, as software becomes a core differentiator and means of production.
- Pervasive impact: Software is disrupting every economic sector, from resource extraction (primary) and manufacturing (secondary) to services (tertiary) and knowledge work (quaternary). Even established industries like automotive are seeing software become a dominant cost and innovation driver (e.g., Tesla’s market cap vs. Ford’s).
- Types of disruption (Moore): Geoffrey Moore’s three types of disruption help categorize threats:
- Infrastructure model: Changes how customers access products (e.g., social media marketing).
- Operating model: Changes the consumer-business relationship (e.g., mobile banking displacing agents).
- Business model: Fundamental software/tech application to change the business itself (e.g., Uber).
- Unbundling industries: Startups, fueled by venture capital, are “unbundling” established industries like finance by targeting specific services with superior software-driven experiences. Tech giants also expand into new markets.
- Technological revolutions (Perez): Carlota Perez’s model describes 50-year cycles:
- Installation period: Financial capital fuels a “Cambrian explosion” of startups using new tech.
- Turning point: A period of crashes and recovery where businesses master new production means or decline. We are currently in this phase.
- Deployment period: Production capital from new industrial giants dominates; startups look to the next revolution.
- Significance of the deployment period: Understanding we are moving towards the deployment period means companies that haven’t mastered software delivery will struggle to survive as production capital takes over. The urgency to adapt is high.
The author’s three epiphanies
Kersten’s personal journey and research revealed fundamental flaws in how software delivery is understood and managed.
- Epiphany 1 (productivity decline): Productivity declines and waste increases as software scales due to disconnects between the architecture and the value stream. Kersten’s own repetitive strain injury (RSI) experience and developer studies showed much time was lost navigating and refinding information, not coding value.
- Epiphany 2 (disconnected value streams): Disconnected software value streams are the bottleneck to software productivity at scale, caused by misapplying the project management model. This thrashing isn’t unique to developers but affects all specialists (analysts, testers, ops) due to tool and process silos.
- Epiphany 3 (value streams as networks): Software value streams are not linear manufacturing processes but complex collaboration networks needing alignment to products. Observations at the BMW plant, contrasted with software delivery realities, highlighted this.
- Common thread: These epiphanies point to the misapplication of concepts from previous technological revolutions to the unique challenges of software.
- Focus shift: Improvements in technology (languages, tools) yield diminishing returns compared to fixing the disconnect between business and IT and within IT itself.
- BMW plant insight: The BMW Leipzig plant, a pinnacle of mass production, showcases how a business can be designed around value flow, a stark contrast to typical enterprise IT.
This chapter emphasizes that mastering software delivery isn’t just about better tech, but a fundamental shift in management, driven by understanding our place in the current technological revolution.
Chapter 2: from project to product
This chapter contrasts the failing project-oriented approach with a more effective product-oriented one, using cautionary tales from Nokia and “LargeBank” and insights from manufacturing successes like BMW and Boeing. It argues that a shift from project to product thinking is essential for connecting software delivery to business value.
Failures of project-oriented transformations
Large-scale transformations often fail when they are managed as projects focused on cost-cutting or activity metrics rather than business outcomes and value flow.
- Nokia’s agile failure: Nokia’s massive agile transformation was measured by activities (e.g., teams trained, “Nokia Test” scores) but failed to address core platform issues (Symbian OS architecture) and downstream deployment bottlenecks. Developers struggled, but this crucial feedback didn’t reach the business, leading to a focus on local optimization rather than the end-to-end value stream.
- LargeBank’s $1b devops failure: A major financial institution spent $1 billion on its third IT transformation attempt, managed as a project focused solely on cost reduction. IT and digital initiatives were disconnected, and success was measured by adherence to project timelines and cost targets, not by increased business value delivery. This led to reduced productivity and talent retention issues.
- The cost center trap: Treating IT as a cost center incentivizes cost reduction above all else, leading to less value for less money, rather than more value for less. This hinders the ability to compete and innovate.
- Resisting proxies (Bezos): Relying on proxy metrics (e.g., deploys per day, teams trained) instead of outcome-based metrics (e.g., revenue, customer satisfaction) leads to a focus on process over results, as highlighted by Jeff Bezos.
- Local optimization: Both Nokia and LargeBank suffered from local optimization – improving one part of the value stream (e.g., agile teams, deployment automation) without addressing the overall system’s bottlenecks or connecting efforts to business goals.
Learning from product-oriented successes
Companies that master their means of production, like Boeing and BMW, demonstrate the power of product-centric thinking and long-term value stream management.
- Boeing’s product thinking: Boeing manages aircraft development (like the 777 and 787 Dreamliner) as profit centers, considering the entire product lifecycle (decades of production and maintenance). The decision to rewrite brake software for the 787 due to lack of traceability, despite delays, showcased a long-term view on maintainability and cost. This requires connecting engineering/IT intimately with business decisions.
- BMW’s adaptable value streams: The BMW Leipzig plant illustrates product thinking by tailoring production lines (e.g., high-volume 1 & 2-Series vs. innovative, adaptable i3/i8 lines) to different business goals and market uncertainties. Profitability and market fit drive the value stream architecture, not vice-versa.
- Product development flow (Reinertsen): Donald Reinertsen’s work emphasizes measuring for life cycle profits, not just short-term project goals or proxy variables.
- Zone management (Moore): Geoffrey Moore’s model (Productivity, Performance, Incubation, Transformation Zones) helps tailor metrics and investment strategies to different product stages. Using only Productivity Zone metrics (cost reduction) for all IT is a common mistake.
Shifting from project to product
The chapter concludes by detailing the fundamental differences between project-oriented and product-oriented management, urging a shift to the latter for success in the age of software.
- Budgeting: Projects have fixed, milestone-based funding, incentivizing large upfront requests. Products fund value streams based on business results, with iterative budget allocation.
- Time frames: Projects have defined end dates, neglecting long-term health. Products consider the entire lifecycle, including ongoing maintenance and evolution.
- Success: Projects focus on on-time, on-budget delivery (cost center). Products focus on business objectives and outcomes (profit center).
- Risk: Projects front-load risk by forcing early specification. Products spread risk through iteration and allow for pivots.
- Teams: Projects “bring people to the work” (temporary, often spanning multiple projects). Products “bring work to the people” (stable, cross-functional teams on one value stream).
- Prioritization: Projects are plan-driven (waterfall). Products are roadmap and hypothesis-driven (agile).
- Visibility: Projects often make IT a black box. Products offer direct mapping to business outcomes, enabling transparency.
- Origin of project thinking: Project management (e.g., Gantt charts) emerged from the age of steel, optimized for predictable work, ill-suited for the creative and uncertain nature of software. Taylorism’s view of workers as fungible resources is detrimental to knowledge work.
This chapter makes a compelling case that adopting a product-oriented mindset, focused on long-term value streams and business outcomes, is crucial for overcoming the limitations of traditional project management in software delivery.
Chapter 3: introducing the flow framework
This chapter formally introduces the flow framework, a new management approach designed to bridge the gap between business strategy and technology delivery. It defines key concepts like value streams, software flow, and the four fundamental “flow items” that represent all work in a software value stream.
The need for a new framework
Existing methodologies (agile, devops, business reengineering frameworks) are valuable but often disconnected. The flow framework aims to connect business-level initiatives with technical ones.
- Bridging the gap: The flow framework provides a layer to connect business strategy with technology delivery, making the “black box” of IT transparent.
- Scaling devops principles: It aims to scale the three ways of devops (flow, feedback, continual learning) beyond IT to the entire business.
- Outcome-oriented metrics: It emphasizes measuring business outcomes, not just activities or proxy metrics like “how agile” a team is.
- Addressing fundamental questions: Most enterprise IT organizations struggle to answer basic production questions: Who is the customer? What value are they pulling? What are the value streams? Where is the bottleneck? The flow framework provides a path to answer these.
- Lean thinking required: The framework is built on lean principles: specify value by product, identify the value stream, make value flow, let customer pull, pursue perfection.
Defining value streams and flow
Central to the framework is the concept of value streams and how work (value) flows through them.
- Value stream: The end-to-end set of activities performed to deliver value to a customer through a product or service. This includes everyone and everything from ideation to delivery and feedback. Agile or devops teams are typically segments of a value stream.
- Software flow: The activities involved in producing business value along a software value stream. This is what needs to be measured end-to-end.
- Flow framework structure: It has two main parts:
- Value stream metrics: Track each value stream and correlate production metrics to business outcomes.
- Value stream network: The infrastructure to measure results delivered by each product.
- Focus on results: The goal is to align all delivery activities around software products and track the business results to create a feedback loop.
The four flow items
To measure and manage flow, all work within a software value stream is categorized into one of four “flow items.” These items are mutually exclusive and collectively exhaustive (MECE).
- Features (new business value): Deliver new value to the customer, driving business results. Pulled by customers. Examples: epic, user story, requirement.
- Defects (quality): Fix quality problems affecting the customer experience. Pulled by customers. Examples: bug, problem, incident.
- Risks (security, governance, compliance): Address security, privacy, and compliance exposures. Pulled internally by security/risk officers. Examples: vulnerability, regulatory requirement.
- Debts (removal of impediments): Improve software and operational architecture to enable future delivery (e.g., refactoring, infrastructure automation). Pulled by architects/technical teams. Examples: API addition, refactoring.
- Business relevance: These flow items provide a common language for business and technology stakeholders to discuss priorities and trade-offs. For example, prioritizing only features can lead to an accumulation of defects, risks, and debts, eventually crippling future feature delivery.
- Architecture shaping: The flow of these items should shape the software architecture, not the other way around. Architecture work (e.g., “enablers” in SAFe) can fall under any of the four flow items depending on its purpose.
- Value stream network as a product: Improvements to the value stream network itself are treated as work on a product, often falling under debt reduction for the team responsible for the network.
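The four flow items above can be sketched as a small data model. This is a minimal illustration, not code from the book; the class names, fields, and the pull-source mapping are assumptions chosen to mirror the definitions given in this chapter:

```python
from dataclasses import dataclass
from enum import Enum

class FlowItemType(Enum):
    """The four mutually exclusive, collectively exhaustive flow items."""
    FEATURE = "feature"  # new business value, pulled by customers
    DEFECT = "defect"    # quality fixes, pulled by customers
    RISK = "risk"        # security/governance/compliance, pulled internally
    DEBT = "debt"        # removal of impediments, pulled by technical teams

@dataclass
class FlowItem:
    """A single unit of work flowing through a software value stream."""
    title: str
    item_type: FlowItemType

# Every piece of work maps to exactly one of the four types.
work = [
    FlowItem("Add export to PDF", FlowItemType.FEATURE),
    FlowItem("Fix login timeout", FlowItemType.DEFECT),
    FlowItem("Patch TLS library", FlowItemType.RISK),
    FlowItem("Refactor billing module", FlowItemType.DEBT),
]
```

Because the categories are MECE, classifying all work this way gives business and technology stakeholders one shared vocabulary for trade-off discussions.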
This chapter lays the conceptual foundation of the flow framework, providing the essential definitions for understanding how to measure and manage software delivery as a flow of business value.
Part II: value stream metrics
Chapter 4: capturing flow metrics
This chapter delves into the specific metrics used to measure and understand the flow of business value within software value streams. It details five key flow metrics: flow distribution, flow velocity, flow time, flow load, and flow efficiency, explaining how each provides insights for optimizing delivery.
Understanding flow distribution
Flow distribution is the proportion of effort allocated to each of the four flow items (features, defects, risks, debts) within a value stream over a specific period.
- Strategic allocation: It allows tailoring value streams to business needs and product maturity. A new product might heavily prioritize features, while a legacy system might focus on defects and risks.
- Zone alignment: Flow distribution can be aligned with investment zones (e.g., Moore’s Incubation, Transformation, Performance, Productivity Zones).
- Dynamic adjustment: It’s not static; it evolves. For instance, post-launch, a product might need to shift focus from features to defects and paying down technical debt incurred during the push to release.
- Tasktop Hub example: Kersten shares how his company, Tasktop, tracked flow distribution for their “Hub” product. An initial intense feature focus led to accumulated technical debt, which later slowed feature delivery until the distribution was rebalanced. This highlights the need for business-level understanding of these trade-offs.
- Zero-sum game: Flow distribution forces explicit trade-offs. More features might mean fewer defect fixes or less risk mitigation in a given period. This makes prioritization transparent to the business.
- Like-sized work items: For balanced visualization, flow items should ideally represent roughly similar amounts of effort. If not, weighting or careful interpretation is needed.
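Flow distribution is simply the share of each flow item type among the items completed in a period. A minimal sketch, assuming like-sized items so each counts equally (the function name and sample data are illustrative):

```python
from collections import Counter

def flow_distribution(completed_items):
    """Proportion of each flow item type among items completed in a period."""
    counts = Counter(completed_items)
    total = sum(counts.values())
    return {item_type: count / total for item_type, count in counts.items()}

# A hypothetical release period: 12 features, 4 defects, 2 risks, 2 debts.
period = ["feature"] * 12 + ["defect"] * 4 + ["risk"] * 2 + ["debt"] * 2
dist = flow_distribution(period)
print(dist["feature"])  # 0.6
```

The zero-sum nature of the metric is visible directly: raising the feature share necessarily lowers the share available for defects, risks, and debts.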
Measuring flow velocity
Flow velocity measures the number of flow items completed in a given time, providing an empirical measure of a value stream’s productivity.
- Output measurement: It’s adapted from agile velocity (story points per sprint) but applied to the four flow items completed (e.g., 10 features and 5 risks completed in a release means a flow velocity of 15).
- Simpler than agile velocity: It doesn’t rely on estimation of size or scope for each item at the business level, though such estimation remains useful for development teams.
- Productivity indicator: It supersedes proxy metrics like lines of code or deploys per day for measuring overall value stream productivity from a business perspective.
- Customer-centric: Flow items are tied to business value (defined by product managers/business analysts), making velocity a measure of value delivered.
- Law of large numbers: Assumes that over many items, the average size/effort per item evens out, making aggregate velocity a useful trend indicator within a value stream. Cross-value stream comparisons require caution due to potential differences in item granularity.
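Since flow velocity is just a count of completed flow items in a reporting window, it can be computed directly from completion dates; this sketch uses hypothetical data and makes no size distinction between items, per the law-of-large-numbers assumption above:

```python
from datetime import date

def flow_velocity(completion_dates, start, end):
    """Number of flow items completed within the reporting window.
    All four flow item types count equally (no size estimation)."""
    return sum(1 for d in completion_dates if start <= d <= end)

# Hypothetical March release: seven items reached "done".
release = [date(2024, 3, d) for d in (2, 5, 9, 12, 15, 20, 25)]
print(flow_velocity(release, date(2024, 3, 1), date(2024, 3, 31)))  # 7
```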
Tracking flow time
Flow time measures the duration it takes for a flow item to go from being accepted into the value stream to being delivered to the customer.
- End-to-end duration: It’s the total “wall clock” time, including active work and wait times, from when work starts on an item (e.g., feature scheduled, defect investigation begins) until it’s “done” (released).
- Distinction from lead/cycle time:
- Lead time (traditional): Often from customer request to delivery (can be very long if backlogs are large).
- Cycle time: Time for a specific step within the process.
- Flow time (flow framework): Specifically from work accepted/started to done.
- Flow states: Based on four generic states (new, active, waiting, done) mapped from various tool-specific workflows.
- Business relevance: A key metric for understanding time-to-market and responsiveness. Different flow items or value streams might have different target flow times (e.g., hours for critical defects, weeks for routine features).
- Non-linear paths: Software flow isn’t always linear; critical items can be fast-tracked, affecting their flow time.
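Given the four generic flow states, flow time can be derived from state-transition timestamps: the wall-clock span from the first time an item goes active until it reaches done, wait states included. A sketch under those assumptions (the transition-log shape is hypothetical):

```python
from datetime import datetime

def flow_time(transitions):
    """Wall-clock duration from when work is accepted (first 'active'
    transition) until it is 'done', including time spent waiting."""
    started = min(t for state, t in transitions if state == "active")
    finished = max(t for state, t in transitions if state == "done")
    return finished - started

# A hypothetical item: accepted Jan 2, waits twice, delivered Jan 9.
transitions = [
    ("new", datetime(2024, 1, 1)),
    ("active", datetime(2024, 1, 2)),
    ("waiting", datetime(2024, 1, 4)),
    ("active", datetime(2024, 1, 6)),
    ("done", datetime(2024, 1, 9)),
]
print(flow_time(transitions))  # 7 days, 0:00:00
```

Note that the day spent in "new" before acceptance does not count, which is exactly what distinguishes flow time from traditional lead time.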
Monitoring flow load
Flow load is the number of flow items actively being worked on (in “active” or “waiting” states) within a value stream at a given time, essentially measuring work in progress (WIP).
- WIP indicator: High flow load (excessive WIP) can indicate overutilization, leading to queues, context switching, and ultimately reduced flow velocity and increased flow time.
- Leading indicator of problems: Tracking flow load helps identify when a value stream is taking on too much parallel work, which can negatively impact output.
- Optimizing utilization: The goal isn’t 100% resource utilization, which is detrimental (as per Reinertsen and Goldratt), but finding the optimal flow load that maximizes velocity and minimizes flow time. This level may vary by value stream.
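As a point-in-time WIP count, flow load is the simplest of the five metrics to compute; this one-liner assumes each item's current state is known (sample states are hypothetical):

```python
def flow_load(current_states):
    """Work in progress: flow items currently in 'active' or 'waiting'."""
    return sum(1 for state in current_states if state in ("active", "waiting"))

# A hypothetical snapshot of one value stream's items.
snapshot = ["active", "waiting", "done", "active", "new"]
print(flow_load(snapshot))  # 3
```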
Assessing flow efficiency
Flow efficiency measures the proportion of time flow items are actively being worked on compared to the total time they spend in the value stream (flow time).
- Waste identification: Low flow efficiency indicates significant “wait states,” where items are idle due to dependencies, queues, or bottlenecks.
- Formula: flow efficiency = (active work time ÷ total flow time) × 100%.
- Improving productivity: A low flow efficiency signals opportunities to remove waste and improve the overall speed of delivery by addressing the causes of waiting.
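The formula above translates directly into code; the worked example below (16 active hours inside an 80-hour flow time) is illustrative:

```python
def flow_efficiency(active_hours, total_flow_hours):
    """Percentage of flow time spent actively working on the item,
    as opposed to sitting in wait states."""
    return active_hours / total_flow_hours * 100

# Hypothetical: 16 hours of active work across a 10-business-day
# (80-hour) flow time means 80% of the elapsed time was waiting.
print(flow_efficiency(16, 80))  # 20.0
```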
These five flow metrics provide a comprehensive dashboard for understanding and managing the health and performance of software value streams from a business perspective.
Chapter 5: connecting to business results
This chapter explains how to connect the flow metrics (discussed in chapter 4) to tangible business outcomes. It proposes tracking four key categories of business results for each product value stream: value, cost, quality, and happiness, thereby creating a feedback loop that links IT investment to business performance.
The importance of business outcomes
Merely measuring flow isn’t enough; this flow must be correlated with the achievement of specific business goals to ensure IT efforts are creating real impact.
- Beyond flow metrics: While flow metrics show how work is progressing, business results metrics show what that work is achieving for the business.
- Feedback loop: Connecting flow metrics to business results creates a powerful feedback loop, enabling continuous learning and data-driven decision-making (the second way of devops).
- Per value stream: Crucially, these business results must be tracked for each product value stream, not just at an organizational or project level. This is key to the “project to product” shift.
Four categories of business results
The flow framework advocates for tracking metrics in these four areas for every value stream.
- Value: This measures the benefit the value stream produces for the business.
- Examples: Revenue (overall, monthly recurring, annual contract), monthly active users, customer satisfaction (NPS), sales pipeline growth.
- Tracking: Requires financial and CRM systems to be configured to attribute value to specific products/value streams.
- Internal products: For internal products (e.g., a billing system, a developer platform), value can be indirect (e.g., adoption rate by other revenue-generating value streams, cost savings enabled).
- Cost: This includes all costs associated with delivering a particular product’s value stream.
- Examples: Staff costs (internal, contractors), license costs, infrastructure costs (internal, hosted), proportion of shared service costs.
- Challenges with project accounting: Traditional project-based accounting makes it hard to accurately assign costs to long-lived product value streams, especially when staff are split across multiple projects. Product-oriented costing is necessary.
- Life cycle profit: Measuring both value and cost per value stream enables the calculation of life cycle profit (as advocated by Reinertsen).
- Quality: This measures the quality of the product produced by the value stream, as perceived by the customer.
- Examples: Escaped defects, number of incidents, support ticket counts, renewal/expansion rates, Net Promoter Score (NPS).
- Customer-visible focus: The flow framework prioritizes customer-visible quality metrics. Internal metrics (e.g., change success rate) are important leading indicators but are considered a level down.
- Trade-off visibility: Tracking quality per value stream makes quality trade-offs (e.g., sacrificing quality for speed) visible and their consequences measurable.
- Happiness: This measures the engagement and satisfaction of the staff working on the value stream.
- Importance: Happy, engaged staff are more creative and productive, especially in knowledge work like software delivery (linking to Daniel Pink’s work on autonomy, mastery, and purpose).
- Examples: Employee Net Promoter Score (eNPS), employee engagement survey results.
- Per value stream, not just department: Kersten emphasizes the importance of measuring happiness within each value stream, as departmental averages can mask issues specific to a product team or initiative. This was a lesson learned at Tasktop.
- Leading indicator: Low happiness can indicate underlying problems in a value stream, such as excessive technical debt, poor tooling, or process friction.
Value stream dashboards
Combining flow metrics with these business results for each value stream creates a powerful dashboard for both technical and business stakeholders.
- Shared visibility: Provides a common language and view for discussing performance, trade-offs, and investment decisions.
- Data-driven decisions: Allows teams and leadership to see, for example, if increased feature flow is actually leading to higher revenue, or if high technical debt is impacting quality and team happiness.
- Dynamic system optimization: Enables organizations to understand the specific dynamics of their value streams (e.g., how flow load impacts velocity for this product) and optimize accordingly, rather than applying generic best practices blindly.
- Identifying broader issues: If feature flow is high but business value isn’t materializing, it might indicate problems outside the IT value stream (e.g., in sales/marketing, or product/market fit).
This chapter provides the crucial link between the mechanics of software delivery (flow metrics) and the strategic goals of the business, ensuring that IT efforts are demonstrably contributing to success.
Chapter 6: tracking disruptions
This chapter uses the lens of the flow framework and its metrics to analyze several real-world examples of company successes and failures in the age of software. It illustrates how an imbalance in prioritizing flow items (features, defects, risks, debts) and a lack of visibility into their impact on business results can lead to severe consequences.
Automotive software: defects vs. features
The automotive industry’s increasing reliance on software highlights the tension between delivering new features and ensuring quality.
- Rising software complexity: Cars have evolved from containing millions of lines of code to potentially billions, driven by infotainment, connectivity, and autonomous driving.
- Increased recalls: Software-related recalls in vehicles have significantly increased, indicating a potential over-prioritization of feature flow at the expense of defect prevention/resolution or paying down technical debt.
- Flow distribution imbalance: The data suggests that the automotive industry may need to shift its flow distribution to focus more on defects and debts until quality stabilizes, similar to how manufacturing quality was mastered in the previous age.
- Value streams for quality: Sophisticated manufacturers like BMW create dedicated value streams for simulation and testing to embed quality, even if it impacts near-term feature velocity.
Equifax: ignoring risks
The massive 2017 Equifax data breach exemplifies the catastrophic outcome of de-prioritizing risk-related work.
- The breach: 145.5 million consumer accounts were compromised due to a known, unpatched vulnerability.
- Leadership disconnect: The CEO’s attempt to blame a single developer highlighted a profound misunderstanding of systemic software risk management.
- Flow distribution failure: It’s hypothesized that Equifax’s value streams over-allocated resources to features or other items, neglecting the critical flow of risk mitigation work.
- Business-level prioritization: Had leadership understood flow distribution, they might have mandated a company-wide focus on risk and debt reduction, potentially averting the disaster. The flow framework makes such strategic allocations visible and manageable.
Nokia: the burden of debts
Nokia’s decline in the mobile phone market illustrates how unaddressed technical debt can cripple a company’s ability to innovate.
- Symbian OS: Nokia’s Symbian OS, once dominant, accumulated massive technical debt. This made it exceedingly difficult and slow to add new features (like an app store) required to compete with iOS and Android.
- The “burning platform”: CEO Stephen Elop’s famous memo acknowledged the dire situation, hinting at the platform’s inability to evolve.
- Invisible debt: The business leadership, likely unfamiliar with the concept and impact of large-scale technical debt in software, didn’t prioritize its reduction until it was too late.
- Flow framework perspective: The feature flow was choked by debt. A strategic decision to invest heavily in debt reduction or replatforming (like Apple did with Mac OS X) was needed much earlier. Lack of visibility into debt as a flow item prevented this.
Microsoft: navigating with strategic flow
Microsoft’s journey through the age of software provides examples of successfully managing flow distribution to address strategic challenges.
- Product-oriented from the start: Microsoft’s leadership, often with software engineering backgrounds (Gates, Nadella), inherently understood product lifecycles and technical trade-offs.
- Pivoting to the internet (1995): Bill Gates redirected company-wide flow distribution to prioritize internet-centric features (e.g., Internet Explorer) to compete with Netscape, accepting temporary quality issues and debt accumulation.
- Trustworthy computing initiative (2002): Recognizing the risks from security vulnerabilities and system instability (“blue screen of death,” “DLL hell”), Gates again shifted focus, this time prioritizing risk reduction and debt paydown across all value streams, even before major breaches became common headlines.
- Innate understanding: Leaders like Gates didn’t need the flow framework explicitly because they had an intuitive grasp of these dynamics. The framework aims to make this understanding accessible to all business leaders.
This chapter powerfully demonstrates that managing the flow of features, defects, risks, and debts is not just an IT concern but a critical business strategy. Visibility into this flow and its connection to business outcomes is essential for navigating disruptions.
Part III: value stream networks
Chapter 7: the ground truth of enterprise tool networks
This chapter explores the often-invisible reality of how software is actually built within large organizations. It emphasizes the need to understand the “ground truth” of developer activity and tool usage to identify the real bottlenecks, leading to Kersten’s first two epiphanies about productivity loss and the impact of disconnected value streams.
Seeking the ground truth
Just as a “gemba walk” in manufacturing allows managers to see work as it happens, understanding software delivery requires observing the flow of work through the tools where it’s performed.
- BMW plant analogy: The visibility of car production at the BMW plant (e.g., the paint shop bottleneck being physically visible) contrasts sharply with the opacity of software work.
- Tools as ground truth: In software, tool repositories (version control, issue trackers, CI/CD systems) contain the data that represents the actual work being done. This data, if properly analyzed, can make the “invisible” work visible.
- Author’s first gemba walk (self-analysis): Kersten’s experience with Repetitive Strain Injury (RSI) forced him to meticulously track his own coding activity. He discovered that a majority of his mouse clicks (the source of pain) were spent navigating and re-finding information, not writing code. This indicated a disconnect between the task at hand and the information needed.
The first epiphany: productivity declines with scale and disconnects
Kersten’s research revealed that as software systems grow, developer productivity often decreases due to a widening gap between the software’s architecture and the flow of work items (features, defects).
- Developer studies: Extending his self-analysis, Kersten studied professional developers at IBM and later a larger group using his open-source tool, Eclipse Mylyn.
- Context switching: Developers constantly switched tasks due to interruptions (urgent features, defects), losing context and time.
- Architecture vs. value stream: The core problem wasn’t just code complexity, but a mismatch between how the code was structured (architecture) and the nature of the work arriving (value stream). Even with good architectural practices, finding and modifying all relevant code for a given task was inefficient.
- Mylyn’s purpose: The Mylyn tool was created to automatically manage this task context, linking development activity directly to value stream artifacts (like tickets), significantly reducing wasted navigation and improving productivity.
- Epiphany 1 restated: Software productivity declines and thrashing increases as software scales, due to disconnects between the architecture and the value stream.
The second epiphany: disconnected value streams are the enterprise bottleneck
Working with larger enterprises like “FinCo,” Kersten realized the developer productivity problem was a symptom of a much larger, systemic issue: disconnected value streams across all IT specializations.
- Beyond developers: The problem of wasted effort and information silos wasn’t limited to developers. Testers, business analysts, operations staff, and project managers all suffered from similar disconnects, often due to using different, unintegrated tools.
- Duplicate data entry: At FinCo, thousands of IT staff spent significant time manually re-entering information between tools (e.g., from developer IDEs to project management tools), causing errors and delays.
- Misapplication of project management: The root cause was often the enterprise’s attempt to manage dynamic, iterative software work using rigid, project-based models and disconnected toolchains. This forced manual handoffs and status reporting, hindering flow and feedback.
- Operational disconnects: Lack of deployment automation and orchestration further fragmented the value stream, disconnecting not just software architecture but also operational infrastructure from the value stream.
- Epiphany 2 restated: Disconnected software value streams are the bottlenecks to software productivity at scale. These value-stream disconnects are caused by the misapplication of the project management model.
This chapter underscores that to improve software delivery, organizations must first understand and then address the fundamental disconnects in their tool networks and processes that hinder the smooth flow of value.
Chapter 8: specialized tools and the value stream
This chapter examines why enterprise tool networks are inherently complex and heterogeneous. It discusses the drivers behind tool proliferation and presents findings from a study of 308 enterprise tool networks, concluding that specialized tools are here to stay and effective integration is crucial.
The rise of tool specialization
The increasing complexity of software delivery has led to a demand for tools tailored to specific roles and tasks, moving away from monolithic, one-size-fits-all solutions.
- Division of labor: As software development scaled, roles became more specialized (e.g., product managers, requirements managers, developers, testers, SREs, support staff). Each specialization sought tools optimized for their specific workflows.
- Example: FinCo: The author’s experience at FinCo revealed that different teams (e.g., Java vs. .NET developers) often used different best-of-breed tools suited to their platforms, contributing to tool diversity.
- Fundamental vs. accidental complexity:
- Fundamental complexity: Heterogeneity that improves value flow by supporting specialized needs (e.g., different tools for Java vs. .NET).
- Accidental complexity: Heterogeneity that doesn’t improve value flow (e.g., redundant tools from mergers or uncoordinated purchases). This is a form of value stream debt.
- Drivers of fundamental complexity:
- Stakeholder specialization: Different disciplines need different tools (e.g., support SLAs vs. developer code reviews).
- Scale specialization: Lightweight tools for small teams vs. robust tools for large, regulated systems.
- Platform specialization: Tools optimized for specific development/cloud platforms (e.g., Microsoft VSTS for Azure).
- Zone specialization: Simpler tools for Incubation Zone products vs. more integrated tools for Performance Zone products.
- Legacy: Cost/disruption of replacing old tools can be prohibitive.
- Supplier diversity: Outsourcing partners and open-source projects often use their own tools.
- Dimensions of scale: Organizational scale isn’t just company size; it involves the number of features, products, partners, markets, and platforms, each adding complexity.
Disconnects in the value stream
This proliferation of specialized tools, without proper integration, leads to fragmented value streams and significant inefficiencies.
- Information silos: Work items (features, defects) often live in multiple, disconnected tools. This forces manual data re-entry, status updates, and report generation, leading to errors, delays, and lost information.
- Example: Security vulnerabilities: At FinCo, developers received spreadsheets of vulnerabilities from a downstream tool and had to manually enter them into their issue tracker, a process ripe for error and delay.
- Lost or erroneous work: One automotive supplier found that up to 20% of requirements and defects were lost or contained errors due to manual handoffs before they integrated their value stream with the OEM.
Mining the ground truth: 308 enterprise tool networks
To understand this reality, Tasktop studied 308 enterprise tool networks, analyzing the tools used and how artifacts flow (or should flow) between them.
- Data source: Value Stream Integration diagrams created with IT administrators during Tasktop deployments. These capture tools, artifact types, and their relationships.
- Study demographics: A broad range of industries, with 28% from the Fortune Global 500.
- Key findings:
- Tool diversity: 55 different tools were reported across these organizations.
- Dominant tool types: Agile Planning and Application Lifecycle Management (ALM) tools were most common, but IT Service Management (ITSM), Project Portfolio Management (PPM), and Requirements Management tools were also significant.
- Artifact commonality: Despite tool diversity, artifact types like “defect,” “requirement,” and “user story” were common. However, each artifact type often spanned numerous tools.
- Multi-tool norm: Only 1.3% used a single tool. 69.3% had artifacts needing to flow across three or more tools, and over 42% used four or more tools for core processes.
- Legacy tools persist: Tools like IBM Rational DOORS (from the 1980s) are still integral for managing requirements in complex, long-lifecycle systems (e.g., aerospace, automotive), indicating that tool replacement is often not feasible.
Why heterogeneous tools are here to stay
The trend is towards more specialization, not consolidation into single-vendor suites, especially for large enterprises.
- Vendor specialization: The market supports a diverse ecosystem of vendors creating increasingly specialized tools (e.g., for GDPR compliance, product-oriented management).
- Tech giant exception: Tech giants like Google and Microsoft often build their own integrated tool networks (e.g., Microsoft’s VSTS). However, even their enterprise customers often use a mix of Microsoft tools and others (like Jira) due to platform diversity.
- Cost of internal platforms: Replicating tech giants’ internal tool platforms is prohibitively expensive for most enterprises.
- The medical analogy: Similar to medicine, where specialization led to progress but also information silos, software delivery benefits from specialized tools but suffers from a lack of integration. This can lead to the software-delivery equivalent of "medical errors": costly mistakes and inefficiencies.
This chapter establishes that diverse, specialized tool networks are an inescapable reality for most large enterprises. The challenge, therefore, is not to eliminate this diversity but to effectively integrate these tools to enable smooth value flow.
Chapter 9: value stream management
This chapter introduces the concept of Value Stream Networks and the models required to build and manage them. It explains how to move beyond the limitations of linear, manufacturing-style thinking for software delivery and embrace a network-based approach to achieve true end-to-end flow and visibility.
The third epiphany revisited: software value streams as networks
Kersten’s third epiphany, prompted by observing the limitations of manufacturing analogies, is that software value streams behave more like complex networks (e.g., airline routes) than linear production lines.
- Limitations of manufacturing analogies: While concepts like CI/CD pipelines resemble linear flows, the creative and adaptive nature of software design and development doesn’t fit neatly. Bottlenecks in software (e.g., a constrained UX team) often lead to workarounds and rerouting, not a complete halt as in a factory.
- Airline network analogy: Software delivery is more like an airline network, where flows can be rerouted around disruptions (e.g., bad weather). This contrasts with a manufacturing line where a single bottleneck can stop everything.
- Epiphany 3 restated: Software value streams are not linear manufacturing processes but complex collaboration networks that need to be aligned to products.
- Key differences from manufacturing:
- Variability: Manufacturing minimizes variability; software embraces it.
- Repeatability: Manufacturing maximizes same-widget throughput; software maximizes iteration and feedback on evolving widgets.
- Design frequency: Manufacturing designs upfront; software design is continuous and integral to the “production” process.
- Creativity: Manufacturing automates away creativity; software leverages automation to support creativity.
- Metcalfe’s Law: The value of a network increases with its connectedness. Disconnected value streams limit overall effectiveness.
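To make the Metcalfe's Law point concrete: a network of n connected nodes has n(n-1)/2 potential pairwise connections, so the value of the network grows roughly with the square of its size. A minimal sketch (the function name is illustrative, not from the book):

```python
# Metcalfe's law sketch: each pair of connected nodes is a potential
# channel for value, so disconnecting tools removes whole swaths of
# connections, not just one.

def potential_connections(n):
    """Number of pairwise connections in a fully connected network of n nodes."""
    return n * (n - 1) // 2

print(potential_connections(4))   # 6
print(potential_connections(10))  # 45
```

Doubling the number of connected tools from 4 to 8 roughly quintuples the potential connections (6 to 28), which is why leaving even a few repositories unintegrated disproportionately limits the network's value.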
The value stream network: three layers
The flow framework defines three interconnected layers to create a manageable and measurable Value Stream Network.
- Tool network (bottom layer): Comprises the actual software tools (Jira, ServiceNow, etc.) where work happens. Connectivity here is crucial.
- Connectivity index: Measures the ratio of integrated tools/repositories to unintegrated ones. Low connectivity means inaccurate flow metrics.
- Artifact network (middle layer): Consists of the instances of work items (defects, stories, etc.) and their relationships as they flow across tools. This is where the “ground truth” of work is visible.
- Traceability index: Measures the breadth and depth of connections between related artifacts. High traceability is vital for governance, compliance, and understanding dependencies. Aim for 100%.
- Value stream network (top layer): The business-level view, mapping tool and artifact network data to product-oriented value streams and flow metrics.
- Alignment index: Measures the proportion of work (artifact containers) connected to a defined product value stream versus remaining in project silos. High alignment means better visibility and product-oriented management.
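As a rough illustration, the three indices above can all be computed as simple ratios over an inventory of tools, artifacts, and containers. This is a sketch under stated assumptions: the book does not prescribe formulas, and the dictionary fields used here are hypothetical.

```python
# Sketch: computing the three network-health indices described above.
# The data shapes (dicts with these keys) are illustrative assumptions,
# not a data model from the book.

def connectivity_index(tools):
    """Share of tool repositories integrated into the tool network."""
    integrated = sum(1 for t in tools if t["integrated"])
    return integrated / len(tools)

def traceability_index(artifacts):
    """Share of artifacts linked to at least one related artifact
    (a crude proxy for breadth and depth of traceability)."""
    linked = sum(1 for a in artifacts if a["links"])
    return linked / len(artifacts)

def alignment_index(containers):
    """Share of artifact containers mapped to a product value stream
    rather than left in a project silo."""
    aligned = sum(1 for c in containers if c.get("value_stream"))
    return aligned / len(containers)

tools = [{"name": "Jira", "integrated": True},
         {"name": "ServiceNow", "integrated": True},
         {"name": "LegacyTracker", "integrated": False}]
print(f"connectivity: {connectivity_index(tools):.0%}")  # 67%
```

The point of expressing the indices this way is that each one is measurable from data the organization already has, so progress toward a fully connected network can be tracked over time.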
Three models for connecting the network
These models bridge the layers, enabling data to flow from tools to business-level insights.
- Integration model: Defines how artifacts (e.g., a “defect”) and their fields (e.g., “priority,” “status”) map and synchronize between different tools in the tool network. This allows a defect originating in a service desk tool to flow to a development tool and back, maintaining consistency. It provides modularity to the tool network.
- Example: Mapping various tool-specific defect statuses like “Open,” “In Progress,” “Resolved” to a common set of states for consistent tracking.
- Activity model: Identifies specific activities performed in the value stream (e.g., “Design,” “Code,” “Test,” “Deploy”) and maps them to the concrete workflow states of artifacts defined in the integration model. It also maps these activities to the four generic flow states (New, Active, Waiting, Done), enabling consistent measurement of flow time and efficiency across all artifacts and value streams.
- Example: A “User Story” artifact might pass through activities like “Backlog Grooming,” “Sprint Planning,” “Development,” “QA Testing,” “User Acceptance Testing,” and “Release.”
- Product model: Maps the often technology-aligned structures within tools (e.g., projects in Jira representing software components) to business-aligned product value streams. This crucial step connects the technical work to the products that deliver business value, allowing flow metrics and business results to be tracked per product.
- Example: Aggregating work from several Jira projects that contribute to a single customer-facing product into one “Product X Value Stream.”
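The three models above can be pictured as nested lookup tables: the integration model normalizes tool-specific statuses, the activity model maps the normalized statuses to the four generic flow states, and the product model maps tool containers to value streams. A minimal sketch, assuming invented tool names, statuses, and project keys (real deployments derive these mappings from each tool's actual workflow schema):

```python
# Sketch of the three connecting models as lookup tables.
# All names and mappings here are illustrative assumptions.

# Integration model: map tool-specific defect statuses to a common set.
INTEGRATION_MODEL = {
    "servicedesk": {"Open": "new", "Investigating": "in_progress", "Fixed": "resolved"},
    "devtracker":  {"To Do": "new", "In Progress": "in_progress", "Done": "resolved"},
}

# Activity model: map common statuses to the four generic flow states.
ACTIVITY_MODEL = {"new": "New", "in_progress": "Active",
                  "blocked": "Waiting", "resolved": "Done"}

# Product model: map tool containers (e.g., Jira project keys)
# to product value streams.
PRODUCT_MODEL = {"PAY-API": "Payments", "PAY-WEB": "Payments", "AUTH": "Identity"}

def flow_state(tool, status):
    """Resolve a tool-specific status to one of the four flow states."""
    common = INTEGRATION_MODEL[tool][status]
    return ACTIVITY_MODEL[common]

def value_stream(project_key):
    """Resolve a tool container to its product value stream, if aligned."""
    return PRODUCT_MODEL.get(project_key, "unaligned")

print(flow_state("servicedesk", "Investigating"))  # Active
print(value_stream("PAY-WEB"))                     # Payments
```

Once every tool's statuses and containers resolve through these tables, flow time, flow efficiency, and flow distribution can be measured consistently regardless of which tool a given artifact lives in.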
Value stream management and eliminating waste
A connected Value Stream Network allows organizations to identify and address common sources of waste (“time thieves” as per Dominica DeGrandis).
- Too much WIP: Flow load metrics make excessive WIP visible, allowing data-driven decisions to limit it and improve flow.
- Unknown dependencies: The artifact network can reveal architectural, expertise, or activity-based dependencies between teams or value streams, enabling proactive management.
- Unplanned work: Flow distribution makes the impact of unplanned work (e.g., urgent defect fixes) visible, allowing for adjustments and addressing root causes.
- Conflicting priorities: The flow framework forces explicit prioritization between flow items at a high level. The Product Model helps align work to business objectives.
- Neglected work: Debts (technical, infrastructure) are first-class flow items, ensuring they get prioritized. The alignment index helps identify and address “zombie projects.”
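Several of these time thieves become visible from the same underlying data. A sketch of how flow load and flow distribution might be derived from a list of work items (the item fields and the WIP limit are illustrative assumptions):

```python
from collections import Counter

# Sketch: surfacing excessive WIP and the work-item mix from the
# artifact network. Fields and the WIP limit are hypothetical.

items = [
    {"stream": "Payments", "type": "feature", "state": "Active"},
    {"stream": "Payments", "type": "defect",  "state": "Active"},
    {"stream": "Payments", "type": "debt",    "state": "Waiting"},
    {"stream": "Identity", "type": "feature", "state": "Done"},
]

# Flow load: items in the Active or Waiting flow states, per value stream.
load = Counter(i["stream"] for i in items if i["state"] in ("Active", "Waiting"))

# Flow distribution: mix of the four flow items currently in progress.
dist = Counter(i["type"] for i in items if i["state"] != "Done")

WIP_LIMIT = 2  # hypothetical per-stream limit
for stream, n in load.items():
    flag = " (over WIP limit)" if n > WIP_LIMIT else ""
    print(f"{stream}: flow load {n}{flag}")
print("distribution:", dict(dist))
```

Reports like this make "too much WIP" and "unplanned work" arguments with data rather than anecdotes: a stream over its limit, or a distribution dominated by defects, is immediately visible.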
This chapter provides the blueprint for building the infrastructure needed to implement the flow framework, transforming disconnected tools and processes into a cohesive, measurable, and manageable Value Stream Network.
Conclusion: beyond the turning point
The conclusion reiterates the urgency for organizations to shift from project-oriented management to a product-centric approach using the flow framework. It emphasizes that this transformation is crucial for surviving the “turning point” of the age of software and thriving in the subsequent “deployment period” or “golden age.”
Navigating the future
The book ends with a call to action, highlighting the potential for a more prosperous future if businesses adapt.
- Historical perspective: Learning from past technological revolutions (like the age of mass production mastered by companies like BMW) and the mistakes of those who failed to adapt (like Xerox fumbling its PARC innovations) is vital.
- The flow framework as a solution: It offers a way to connect business and technology, manage software delivery as value-creating product portfolios, and gain the visibility needed to make strategic decisions.
- Social and corporate responsibility: Adapting is not just about corporate survival but also about ensuring a broader distribution of wealth and opportunity, preventing a future dominated by a few tech monopolies.
- Beyond the book: Implementing the flow framework requires customizing business results metrics, and it doesn’t prescribe specific design or strategy methods. It provides the how for managing delivery once the what (strategy) is defined. Technical details for value stream architects are also beyond its scope but are enabled by its concepts.
- Future potential: A connected value stream network creates a unified data model for software delivery, paving the way for AI-driven optimization and simulation of organizational changes.
- The choice: Organizations can either become “fossils” of a bygone era or evolve to thrive in the age of software. The shift to a product model, enabled by the flow framework, offers a path to the latter.
The conclusion reinforces that the principles and practices outlined in “project to product” are not just theoretical but a practical guide for businesses to transform their IT into a powerful engine for innovation and value creation.
Big-picture wrap-up
“Project to product” argues that the traditional project-based management of IT is obsolete in the age of software. To survive and thrive, organizations must shift to a product-oriented model, treating software delivery as a set of value streams focused on business outcomes. The flow framework provides the concepts, metrics, and infrastructure model (value stream networks) to make this transition, enabling visibility, feedback, and continuous improvement across the entire software lifecycle.
- Core takeaway: Successfully navigating the age of software requires a fundamental shift from managing IT projects to cultivating product value streams, measured by the flow of business value.
- Next action: Begin identifying one key product and map its end-to-end value stream, including all people, processes, and tools involved. This is the first step towards understanding your current state and where the flow framework can be applied.
- Strategic imperative: The flow framework isn’t just about IT efficiency; it’s about aligning technology delivery directly with business strategy and results, enabling faster adaptation and innovation.
- Mindset shift: Moving from cost-center thinking to profit-center (or value-center) thinking for software is crucial.
- The four flow items: Understanding and balancing the flow of features, defects, risks, and debts provides a powerful lens for strategic decision-making.
- Visibility is key: Without the ability to see how value flows (or doesn’t flow) through your systems, effective management and improvement are impossible. Value stream networks aim to provide this visibility.
- Long-term journey: This is not a quick fix but a transformational journey requiring commitment from both business and technology leadership.
- Reflective question: How much of your current IT investment and effort can you directly and clearly trace to tangible business outcomes, and where are the “black boxes” in your delivery process?