
Continuous Discovery Habits: Complete Summary of Teresa Torres’s Framework for Continuous Product Discovery
Introduction: What This Book Is About
Teresa Torres’s Continuous Discovery Habits introduces a structured and sustainable approach to product discovery, designed to help product trios consistently build products that customers love and that drive business value. The book addresses the common challenges product teams face, such as being output-focused rather than outcome-driven, struggling to understand customer needs, and failing to align stakeholders. By outlining a collection of habits, Torres provides a practical guide for product managers, designers, and software engineers to integrate continuous discovery into their daily work.
The core premise is to continuously engage with customers to uncover opportunities and validate solutions, moving away from infrequent, project-based research. Readers will learn how to shift their mindset from simply shipping features to delivering tangible outcomes for both customers and the business. This summary covers the book’s key insights, practical applications, and methodological frameworks so that readers can apply these habits immediately in their own product development.
This book is primarily for product people, including product managers, designers, and software engineers, who are eager to create valuable and impactful digital products. It offers a blueprint for teams to continuously discover unmet customer needs and the solutions that effectively address those needs, ensuring long-term product success and business viability.
Chapter One: The What and Why of Continuous Discovery
This chapter defines continuous discovery and explains its importance in modern product development. It contrasts traditional, output-focused approaches with a more flexible, outcome-driven methodology, highlighting the evolution of product practices over time.
Defining Discovery vs. Delivery
Torres distinguishes between discovery—the work done to decide what to build—and delivery—the work done to build and ship a product. This distinction is crucial because many companies over-invest in delivery metrics (shipping on time and budget) while under-investing in discovery, leading to products that might be well-built but lack customer value. The book aims to correct this imbalance by emphasizing continuous discovery.
The Evolution of Modern Product Discovery
Historically, discovery was often controlled by business leaders in annual budgeting processes, leading to fixed timelines and a focus on projects. This often resulted in software development being unpredictable, with products frequently delivered late, over budget, and, critically, not meeting customer needs. This old way of working led to significant waste. The Agile Manifesto (2001) emerged as a response, advocating for shorter cycles, frequent customer feedback, sustainable pace, flexibility, and simplicity in software development.
While Agile improved delivery, many teams still struggled with discovery. Leaders often clung to original ideas, and usability testing was often too late in the process. However, the rise of more instrumentation allowed teams to measure when features went unused, increasing awareness of the “building the wrong stuff” problem. This led to a shift: decision-making began moving from business stakeholders to product managers and then to the entire product team, incorporating customer engagement throughout the discovery process, not just at the end. This evolution culminates in continuous discovery, where teams engage with customers regularly to adapt in real-time.
The Working Definition of Continuous Discovery
Torres provides a precise definition for continuous discovery, emphasizing its core components:
- Weekly touchpoints with customers: At a minimum, teams should engage with customers every week.
- By the team building the product: The actual product trio (product manager, designer, engineer) should be directly involved in customer interactions.
- Where they conduct small research activities: Focus on lightweight, iterative research rather than large, infrequent studies.
- In pursuit of a desired outcome: All discovery efforts should be tied to clear business and customer outcomes.
This continuous cadence ensures daily product decisions are informed by customer input.
The Product Trio: Who This Book Is For
The book is written for the product trio, comprising a product manager, a designer, and a software engineer. This cross-functional unit is collectively responsible for ensuring products create value for the customer and value for the business. While other roles (marketers, data analysts) contribute, the core decision-making unit for adopting these habits is typically this trio, though it can expand to a “quartet” or “quintet” depending on the team’s needs. The key is to balance speed of decision-making with inclusiveness.
Prerequisite Mindsets for Continuous Discovery
Six key mindsets are essential for successfully adopting continuous discovery habits:
- Outcome-oriented: Success is defined by the value created for customers and business (outcomes), not just features shipped (outputs).
- Customer-centric: The customer is at the center of all efforts, aligning customer needs with business needs.
- Collaborative: Embrace cross-functional teamwork and shared decision-making, rejecting siloed hand-offs.
- Visual: Use drawing and mapping to externalize thinking and leverage human spatial reasoning for clearer understanding.
- Experimental: Put on a scientist’s hat, identifying assumptions and gathering evidence to test them.
- Continuous: Shift from a project mindset to a continuous mindset, infusing discovery throughout the development process for fast answers to daily questions.
Chapter Two: A Common Framework for Continuous Discovery
This chapter introduces the fundamental framework for continuous discovery, emphasizing the shift from outputs to outcomes and presenting the Opportunity Solution Tree (OST) as a visual guide for product teams.
Beginning With the End in Mind: Outcomes Over Outputs
The evolution of product discovery emphasizes a shift from an output mindset to an outcome mindset. Instead of obsessing over features, the focus moves to the impact those features have on customers and the business. Starting with clearly defined outcomes is the foundation for product success. When a product trio is tasked with an outcome, they gain autonomy to find the best solutions, encouraging them to create value for the customer. However, it’s crucial to pair this with a customer-centric mindset to avoid pitfalls like the Wells Fargo scandal, where a business outcome was pursued at the cost of customer trust and ethical practices.
The Challenge of Driving Outcomes
Many product trios lack experience in driving outcomes, having been told what to build in the past. Simply talking to customers weekly is not enough; the purpose of customer touchpoints is to conduct research in pursuit of a desired outcome. This means understanding how to frame the problem in a customer-centric way, discovering the customer needs, pain points, and desires that, if addressed, would drive the business outcome. These customer needs, pain points, and desires are collectively referred to as opportunities, forming the “opportunity space.” This space includes both “problems to solve” and “desires to satisfy.”
The Underlying Structure of Discovery: Opportunity Solution Tree (OST)
Torres introduces the Opportunity Solution Tree (OST) as a simple, visual framework to guide discovery work. It outlines a structured approach:
- Root: The desired outcome, representing the business value to be created.
- First Level: The opportunity space, encompassing customer needs, pain points, and desires that can drive the outcome.
- Second Level: The solution space, where potential solutions to address the identified opportunities are explored.
- Third Level: Assumption tests, used to evaluate which solutions are most effective and viable.
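The four levels above form a simple tree. As an illustration only (the node names and labels below are my own, not from the book), the structure can be sketched as nested nodes in Python:

```python
from dataclasses import dataclass, field

# A minimal sketch of an Opportunity Solution Tree as nested nodes.
# The "kind" values mirror the four levels Torres describes; labels are illustrative.
@dataclass
class Node:
    kind: str                      # "outcome" | "opportunity" | "solution" | "assumption_test"
    label: str
    children: list["Node"] = field(default_factory=list)

    def add(self, kind: str, label: str) -> "Node":
        child = Node(kind, label)
        self.children.append(child)
        return child

# Example tree for a streaming service, using opportunities that appear in later chapters.
tree = Node("outcome", "Increase average minutes watched")
opp = tree.add("opportunity", "I can't find anything to watch")
sub = opp.add("opportunity", "I'm out of episodes of my favorite shows")
sol = sub.add("solution", "Recommend similar shows when a series ends")
sol.add("assumption_test", "Viewers want suggestions immediately after a finale")

def depth(node: Node) -> int:
    """Number of levels beneath and including this node."""
    return 1 + max((depth(c) for c in node.children), default=0)

print(depth(tree))  # prints 5: outcome → opportunity → opportunity → solution → test
```

Note that the tree allows opportunities to nest under other opportunities, which is exactly how large problems get broken into smaller, solvable ones in Chapter Six.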
Benefits of Opportunity Solution Trees
OSTs offer multiple benefits for product trios:
- Resolving tension between business and customer needs: By starting with a business outcome and filtering opportunities based on their potential to drive that outcome, OSTs ensure both aspects are considered.
- Building and maintaining shared understanding: Visualizing options helps the team align and collaborate over time, reducing fruitless opinion battles.
- Adopting a continuous mindset: OSTs help break large, project-sized opportunities into smaller, solvable ones, enabling iterative value delivery.
- Unlocking better decision-making: They encourage a “compare and contrast” mindset instead of “whether or not” decisions, helping teams avoid common decision-making villains like narrow framing, confirmation bias, short-term emotions, and overconfidence.
- Unlocking faster learning cycles: By explicitly mapping understanding of the customer and solution space, OSTs allow teams to quickly revise assumptions when solutions fail, fostering a cycle of learning and iteration.
- Building confidence in knowing what to do next: The tree’s structure guides teams on where to focus their discovery efforts, whether it’s more interviews or deeper ideation.
- Unlocking simpler stakeholder management: OSTs provide a clear, visual way to show work, explain thinking, and have better conversations with stakeholders, fostering buy-in without dictating outputs.
OSTs and Decision-Making Nuances
The framework helps teams make two-way door decisions—reversible choices that allow for quick course correction based on new learning. This contrasts with “one-way door decisions” which are hard to reverse and require much more caution. For discovery, most decisions are two-way door, allowing teams to move fast and learn from consequences rather than falling into analysis paralysis. The iterative nature means that even if a decision is suboptimal, it can be quickly adjusted.
Chapter Three: Focusing on Outcomes Over Outputs
This chapter deepens the understanding of outcomes, distinguishing between different types of metrics and outlining how product trios can effectively negotiate and manage outcomes with their leadership.
Why Outcomes?
Managing by outcomes provides teams with autonomy, responsibility, and ownership to find the best solutions. It shifts the focus from delivering fixed roadmaps to solving customer problems or addressing business needs. This strategy inherently leaves room for doubt and exploration, allowing teams to pivot quickly if initial solutions don’t yield the desired impact. Without clear outcomes, discovery work can become endless and frustrating. This approach is supported by industry best practices and research indicating that challenging, specific goals (when teams are committed and believe they can achieve them) drive better performance.
Exploring Different Types of Outcomes
Torres differentiates three types of outcomes to guide product teams:
- Business Outcomes: Measure overall business progression (e.g., grow revenue, reduce costs, increase market share). These are often lagging indicators, making it hard for teams to act proactively.
- Product Outcomes: Measure how well the product moves the business forward. These are typically leading indicators and are within the product trio’s direct span of control. For example, Sonja’s team at tails.com shifted from 90-day retention (lagging business outcome) to 30-day and 5-day retention, and then to increasing perceived value of tailor-made dog food and number of dogs who liked the food (leading product outcomes).
- Traction Metrics: Measure usage of a specific feature or workflow. These are generally outputs in disguise if assigned as the primary goal, limiting team autonomy. They are appropriate for junior teams to gain experience or for mature products undergoing optimization challenges where broader discovery questions have been answered.
Outcomes Are the Result of a Two-Way Negotiation
Setting a team’s outcome should be a two-way negotiation between the product leader and the product trio. The product leader communicates strategic intent and what’s most important for the business, identifying appropriate product outcomes. The product trio brings customer and technology knowledge, estimating how much the metric can be moved in a given period. This negotiation ensures the outcome is ambitious yet achievable, potentially leading to adjustments in strategy or resource allocation. Research shows that teams involved in setting their own outcomes take more initiative and perform better.
Do You Need S.M.A.R.T. Goals?
While S.M.A.R.T. (Specific, Measurable, Achievable, Relevant, Time-bound) goals are common, research suggests nuances for complex product work. For complex tasks, challenging goals can decrease performance if teams lack strategies. It’s often more effective to start with a learning goal (e.g., “discover the strategies that might work”) before committing to a performance goal (e.g., “increase engagement by 10%”). This allows teams time to learn how to best measure and impact a new outcome, like Sonja’s team who focused on learning what led to churn before refining their retention metric.
A Guide for Product Trios (Tips for Adoption)
Torres offers advice for product trios based on their current situation:
- If asked to deliver outputs: Ask leaders for business context and the desired business outcome. Try to connect outputs to potential product outcomes.
- If leader sets outcomes with little input: Map out potential product outcomes that drive the business outcome and clearly communicate how much the team can realistically move the metric.
- If trio sets own outcomes: Proactively seek business context from leaders (company vision, strategic initiatives, customer segments) to ensure alignment.
- If already negotiating outcomes: Continuously verify that the outcome is a product outcome (not business/traction metric), that traction metrics are well-known behaviors, that learning goals are set for new metrics, and that specific challenging goals are used for experienced metrics.
Avoiding Common Anti-Patterns in Outcome Setting
Product teams should avoid several pitfalls when setting outcomes:
- Pursuing too many outcomes at once: Spreads efforts thin, leading to minimal impact on any single outcome. Focus on one outcome at a time for greater impact.
- Ping-ponging from one outcome to another: Prevents teams from reaping the benefits of the learning curve for a new metric. Commit to an outcome for a few quarters.
- Setting individual outcomes: Leads to misaligned efforts within the product trio. Set team outcomes that encourage collaboration.
- Choosing an output as an outcome: Confusing features with impact. Always ask, “What value will this output create?”
- Focusing on one outcome to the detriment of all else: Neglecting other critical metrics. Monitor health metrics to ensure primary outcome pursuit doesn’t cause negative side effects (e.g., customer satisfaction, ethical considerations).
Chapter Four: Visualizing What You Know
This chapter emphasizes the power of visualization, specifically through experience maps, as a critical tool for product trios to understand their customers and align their collective knowledge.
The Purpose of Visualizing Knowledge
When tackling a desired outcome, especially for the first time, it’s easy to get overwhelmed by the infinite opportunity space. To make sense of it, product trios must first inventory their existing knowledge about the customer’s experience. This is crucial for cross-functional teams, as each member brings a unique perspective and background (e.g., product manager with customer complaints, designer with user confusion, engineer with technical flow). Experience maps allow teams to visualize and merge these individual perspectives into a shared understanding, explicitly capturing hunches, open questions, and areas for validation. This visual artifact acts as a guide for future customer interviews.
Setting the Scope of Your Experience Map
To prevent overwhelming detail, the scope of the experience map should be constrained by the desired outcome. For instance, a team aiming to increase application submissions would map the customer’s experience filling out the application, focusing on what prevents completion. For broader outcomes, like increasing “average minutes watched” for a streaming service, the scope might be “How do customers entertain themselves with video?” The key is to find a scope that provides enough room for exploration while remaining focused on the outcome.
Starting Individually to Avoid Groupthink
To leverage the diverse knowledge within the trio and prevent groupthink, each member should create their own individual experience map first. While this might feel inefficient due to duplicated effort, it ensures that all unique perspectives are captured without early influence from others. This individual work allows each person to fully externalize their thoughts before the collaborative synthesis.
Experience Maps Are Visual, Not Verbal
Drawing is a critical thinking aid that helps externalize thoughts and identify gaps more easily than verbal descriptions. Despite potential discomfort with drawing skills, the goal is not artistic quality but visualizing thinking. Using boxes, arrows, and simple figures forces concrete representation, preventing the vagueness inherent in language. The focus should be on depicting the customer’s experience, including their actions, thoughts, feelings, and obstacles, rather than internal product flows. This visual clarity supports a deeper, shared understanding.
Exploring Diverse Perspectives on Your Team
After individual mapping, the trio should share their drawings and actively ask questions to understand each other’s viewpoints. The emphasis should be on curiosity about differences, not judgment of accuracy. Each perspective, even if incomplete or seemingly “wrong,” contributes to a richer collective understanding. This sharing phase is about listening and clarifying, not advocating for one’s own map.
Co-Creating a Shared Experience Map
The final step is to synthesize individual maps into a shared team experience map. This involves:
- Converting maps to nodes and links: Identify distinct moments, actions, or events (nodes) and their connections (links).
- Arranging and collapsing nodes: Combine all individual nodes into a comprehensive map, then merge similar nodes, ensuring sufficient detail is retained.
- Determining links: Use arrows to show the flow, including happy paths, re-do loops, and abandonment points.
- Adding context: Capture what customers are thinking, feeling, and doing at each step, ideally visually, to aid synthesis and future recall for the team and stakeholders. This map should be viewed as a first draft, continuously evolving with new learning.
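The nodes-and-links structure described above is essentially a small directed graph. A hedged sketch, with node names of my own invention (an application flow, not an example from the book), shows how re-do loops and abandonment points can be represented and found:

```python
# A shared experience map as nodes (moments) and links (what can happen next).
# Node names are illustrative, not drawn from the book.
experience_map = {
    "Decide to apply": ["Start application"],
    "Start application": ["Fill in details"],
    "Fill in details": ["Submit", "Fix validation errors", "Abandon"],
    "Fix validation errors": ["Fill in details"],  # a re-do loop
    "Submit": [],                                  # happy-path completion
    "Abandon": [],                                 # drop-off point worth probing in interviews
}

def terminal_states(graph: dict[str, list[str]]) -> set[str]:
    """Nodes with no outgoing links: completions and abandonment points."""
    return {node for node, nexts in graph.items() if not nexts}

print(sorted(terminal_states(experience_map)))  # prints ['Abandon', 'Submit']
```

Listing the terminal states makes abandonment points explicit, which is useful when the team decides where to focus its next interviews.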
Avoiding Common Anti-Patterns in Visualization
- Getting bogged down in endless debate: Use drawing to resolve disagreements, as it forces specificity and reveals points of alignment or true divergence.
- Using words instead of visuals: Resist the urge to revert to text; drawing engages different cognitive processes that reveal patterns and insights.
- Moving forward as if your map is true: Remember the map is a hypothesis of current understanding, not a definitive truth, and needs to be tested.
- Forgetting to refine and evolve your map: This is not a one-time activity. Continuously update the map as new customer insights emerge to maintain a shared understanding and prevent individual perspectives from diverging.
Chapter Five: Continuous Interviewing
This chapter explores the crucial habit of continuous interviewing, delving into why it’s vital for discovering customer opportunities and how to conduct effective story-based interviews.
The Purpose of Continuous Interviewing
Continuous interviewing is fundamental for discovering customer opportunities—their unmet needs, pain points, and desires. It’s not about asking customers what to build, but uncovering deeper insights. Steve Jobs, despite his famous quote about customers not knowing what they want, was a master at identifying unmet needs, as exemplified by visual voicemail on the first iPhone. This feature addressed a subtle, unrecognized pain point (tedious sequential voicemail) that customers would never have articulated as a “want.” For product teams, continuous interviewing provides a reliable method to find these unarticulated opportunities.
Challenges With Asking People What They Need
Direct questions about behavior or preferences often lead to unreliable answers. People tend to describe their ideal behavior or rationalize actions with coherent but not necessarily true stories (the “left-brain interpreter” phenomenon described by neuroscientist Michael Gazzaniga). For instance, a customer might say “fit is number one” for jeans but actually buy based on brand or sale price. These cognitive biases mean that relying on direct “what do you want?” questions can lead to building the wrong product, as Torres experienced when her team built a passive-candidate recruiting solution that flopped because recruiters, despite saying they wanted it, continued to prioritize active candidates for speed.
Distinguishing Research Questions from Interview Questions
Effective interviewing requires a clear distinction:
- Research Questions: What the team needs to learn (e.g., “What needs, pain points, and desires matter most to this customer?”).
- Interview Questions: Specific prompts designed to elicit stories, not direct answers. The best way to learn about actual behavior is by asking for specific stories about past experiences. Instead of “What criteria do you use…?”, ask “Tell me about the last time you purchased a pair of jeans.” This grounds the answers in reality. The scope of story questions (e.g., “Tell me about the last time you watched any streaming entertainment” vs. “our service”) should be tailored to the immediate learning need, guided by the experience map.
Excavating the Story
Interviewers must actively “excavate” the story because conversational norms lead to short answers. Strategies include:
- Setting expectations: Inform participants to share full details and that the interviewer will ask for missing specifics later.
- Using temporal prompts: “Start at the beginning. What happened first?” and “What happened next?” to guide them along a timeline.
- Focusing on story elements: Ask about who was with them, challenges encountered, how they overcame them, and supporting characters.
- Gently guiding back: When participants generalize (“I usually…”), gently redirect them to the specific instance (“In this specific example, did you face that challenge?”). This practice takes time and patience but ensures reliable data.
Synthesize as You Go: The Interview Snapshot
Continuous interviewing means there’s no clear stopping point for synthesis. Instead, teams should synthesize as they go using an interview snapshot. This is a one-pager designed to capture actionable insights from a single interview. Key elements include:
- Visuals: A photo of the participant (with permission) or a representative visual to aid memory.
- Memorable Quote: A striking quote that encapsulates a key moment or emotion from their story.
- Quick Facts: Contextual information about the customer (e.g., segment, usage data) to help compare stories.
- Opportunities: Customer needs, pain points, or desires, framed using the customer’s words, not solutions. If a feature is requested, ask “What would that do for you?” to uncover the underlying need.
- Insights: Any interesting observations that don’t fit directly as opportunities but are worth capturing.
- Story Map Drawing: A simple visual representation of the participant’s unique story, showing key moments and their flow. This helps in understanding and later identifying patterns across multiple interviews.
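The snapshot elements above amount to a lightweight record per interview. As a sketch only (the field names are my shorthand for Torres’s elements, not an official schema), it could be captured as:

```python
from dataclasses import dataclass, field

# A minimal interview-snapshot record; field names are illustrative shorthand
# for the one-pager elements, not a schema from the book.
@dataclass
class InterviewSnapshot:
    participant: str
    memorable_quote: str
    quick_facts: dict[str, str] = field(default_factory=dict)
    opportunities: list[str] = field(default_factory=list)  # customer's words, not solutions
    insights: list[str] = field(default_factory=list)
    story_map_ref: str = ""  # pointer to the story drawing, e.g. a whiteboard URL

snapshot = InterviewSnapshot(
    participant="Streaming viewer, 2 years on a family plan",
    memorable_quote="I spend more time scrolling than watching.",
    quick_facts={"segment": "family plan", "weekly hours": "6"},
    opportunities=["I can't find anything to watch"],
    insights=["Browses on phone, but watches on TV"],
)
print(snapshot.opportunities)
```

Keeping one such record per interview is what makes it possible to compare stories and spot recurring opportunities later, without re-reading raw notes.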
Interview Every Week: The Keystone Habit
Weekly interviewing is foundational to a strong discovery practice. It helps explore the ever-evolving opportunity space and ensures fast answers to daily questions. It’s a keystone habit because consistently engaging with customers naturally drives the adoption of other discovery habits, like rapid prototyping and assumption testing. Maintaining a weekly cadence is easier than starting and stopping, making it robust against unexpected challenges.
Automating the Recruiting Process
To make continuous interviewing sustainable, recruiting must be automated:
- In-product recruitment: Integrate a simple question in your product (e.g., “20 minutes for $20?”). For high-traffic sites, ask for a phone number; for lower traffic, use scheduling software.
- Leverage customer-facing colleagues: Ask sales, account managers, or support teams to recruit by joining existing meetings or using defined triggers to schedule interviews with specific customers. Make it easy for them with clear scripts.
- Customer Advisory Boards: For hard-to-reach or niche audiences, establish a board of customers willing to participate in regular one-on-one interviews, offering ongoing incentives. This ensures consistent access while acknowledging the risk of designing for a small subset.
Interview Together, Act Together
Product trios should interview together to ensure shared understanding and avoid one person becoming the sole “voice of the customer.” Diverse perspectives within the trio mean that each member will pick up on different salient data points during an interview, leading to richer insights. This collaboration fosters collective ownership and strengthens team decisions.
Avoiding Common Anti-Patterns in Interviewing
- Relying on one person to recruit/interview: Spreads knowledge too thin and makes the habit fragile. Everyone on the team should be proficient.
- Asking who, what, why, how, and when questions: Avoid direct, factual questions that lead to unreliable data. Focus on story-based questions.
- Interviewing only when you think you need it: Disrupts the continuous flow and delays learning. Maintain weekly interviews regardless of immediate perceived need.
- Sharing raw notes/recordings: Overwhelms colleagues. Use interview snapshots to synthesize and share actionable insights.
- Stopping to synthesize a set of interviews: In a continuous model, synthesize as you go using snapshots, rather than waiting for large batches of interviews to conclude.
Chapter Six: Mapping the Opportunity Space
This chapter details the crucial process of organizing and structuring the vast array of customer needs, pain points, and desires into a coherent map using the Opportunity Solution Tree.
The Power of Opportunity Mapping
Customer stories are rich with needs, pain points, and desires, but it’s easy to get overwhelmed. Opportunity mapping provides a critical way to take inventory and give structure to the infinite opportunity space. This process helps teams decide which opportunities are most important to address now and which should be deferred. The goal is to address opportunities that not only serve the customer but also drive the desired business outcome, ensuring the product’s long-term viability. As John Dewey notes, good thinking requires systematic and protracted inquiry, exploring options deliberately rather than jumping to the first solution. The opportunity space is dynamic, constantly evolving, expanding, and contracting as new insights emerge.
Taming Opportunity Backlogs
Managing opportunities as a flat, prioritized list (an “opportunity backlog”) is difficult because opportunities vary in scope and interrelation. For example, “I can’t find anything to watch” and “I’m out of episodes of my favorite shows” are related but not directly comparable in a flat list. Similarly, opportunities like “I want to watch shows on my flight” and “I want to watch shows on my train commute” might seem similar but have distinct contexts. A flat list makes it hard to compare “big, hard problems” with “easy” ones or to understand their dependencies. The opportunity space’s complexity demands a more structured approach than a simple list.
The Power of Trees for Structuring Opportunities
The Opportunity Solution Tree (OST) provides a powerful visual framework to manage the complexity of the opportunity space. It depicts two key relationships:
- Parent-child relationships: A child opportunity is a subset of a parent opportunity. For example, “I’m out of episodes of my favorite shows” is a child of “I can’t find anything to watch.” This helps break down large, intractable problems into smaller, more solvable ones.
- Sibling relationships: Opportunities that are distinct but all descend from the same parent. Siblings can be addressed independently while collectively contributing to the parent opportunity. For example, “I can’t figure out how to search for a specific show” and “The show I was watching is no longer available” are siblings under “I can’t find anything to watch.”
This tree structure enables iterative value delivery. Instead of tackling a huge problem like “Is this show any good?”, it breaks it into smaller, shippable solutions like “Who is in this show?” or “What type of show is this?” Delivering these smaller solutions iteratively eventually addresses the larger opportunity.
Identifying Distinct Branches (Top-Level Opportunities)
To properly structure the opportunity space, it’s essential to identify distinct moments in time during the customer’s experience, ensuring no overlap between branches. Two primary strategies for this are:
- Using the Experience Map (Chapter 4): The key nodes or steps from your customer experience map can directly form the top-level opportunities on your OST.
- Analyzing Interview Story Drawings (Chapter 5): Identify recurring key moments (nodes) across multiple customer stories and stitch them together to form a generalized experience map, which then informs your top-level opportunities.
For a streaming service, distinct moments might be “Deciding to watch something,” “Choosing something to watch,” “Watching something,” and “The end of the watching experience.” These become the top-level branches of the OST.
Taking an Inventory of the Opportunity Space
Once distinct branches are established, review interview snapshots to inventory opportunities. For each potential opportunity, ask:
- Is it framed as a customer need, pain point, or desire (not a solution)? (e.g., “I don’t like typing long movie titles,” not “I wish I had voice search”).
- Is it unique to this customer or a recurring pattern? Focus on patterns seen in multiple interviews.
- Will addressing it drive the desired outcome? This ensures strategic alignment.
Only opportunities meeting all three criteria should be added to the tree, grouped under the most relevant branch. If an opportunity fits multiple branches, reframe it more specifically or split it into distinct sub-opportunities.
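The three questions act as a simple all-or-nothing filter. A hedged sketch (the record fields and example labels are my own, assuming each candidate opportunity has been screened against the three criteria):

```python
# The three-question filter applied when inventorying opportunities.
# Field names are illustrative, not terminology from the book.
def belongs_on_tree(opportunity: dict) -> bool:
    """Add an opportunity to the tree only if all three criteria hold."""
    return (
        opportunity["framed_as_need"]                   # need/pain/desire, not a solution
        and opportunity["seen_in_multiple_interviews"]  # a recurring pattern, not a one-off
        and opportunity["drives_outcome"]               # strategically aligned
    )

candidates = [
    {"label": "I don't like typing long movie titles",
     "framed_as_need": True, "seen_in_multiple_interviews": True, "drives_outcome": True},
    {"label": "I wish I had voice search",  # a solution in disguise
     "framed_as_need": False, "seen_in_multiple_interviews": True, "drives_outcome": True},
]

kept = [c["label"] for c in candidates if belongs_on_tree(c)]
print(kept)  # prints ["I don't like typing long movie titles"]
```

The point of the conjunction is that failing any one criterion keeps an item off the tree: a solution in disguise is excluded even if it is a recurring pattern that would drive the outcome.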
Adding Structure to Each Branch
After inventorying, structure each branch by:
- Grouping similar opportunities: Identify siblings (distinct but related) and opportunities that are simply different phrasings of the same underlying need.
- Identifying parent opportunities: For sibling groups, look for a higher-level need that encompasses them. This parent might have been explicitly mentioned or be implied by the children.
- Merging duplicates: Combine opportunities that are essentially the same, often by reframing one to be more encompassing.
- Iterating: Continue this process, breaking down larger opportunities into smaller sub-opportunities, until a clear, multi-level tree structure emerges within each main branch. This creates a logical hierarchy that aids in prioritization and understanding.
Just Enough Structure
The goal is “just enough structure”: sufficient to see the big picture without being overwhelmed by excessive detail or debate. Opportunity mapping is an iterative process, not a one-time exercise. It will continually evolve as the team learns more about customers, reframing, subdividing, and moving opportunities as understanding deepens. The first draft should simply capture current knowledge, trusting that it will be refined over time.
Avoiding Common Anti-Patterns in Opportunity Mapping
- Opportunities framed from your company’s perspective: Opportunities must be framed from the customer’s point of view (e.g., “I want access to more compelling content,” not “I wish I had more streaming-entertainment subscriptions,” which is really the company’s wish). Verify that the opportunity was actually heard in interviews.
- Vertical opportunities: A chain of single parent-child relationships indicates missing sibling opportunities. If a sub-opportunity only partially solves the parent, identify and explore other potential siblings in future interviews.
- Opportunities have multiple parent opportunities: If an opportunity fits under more than one parent, it’s likely too broad. Get more specific and define it distinctly for each relevant moment in time.
- Opportunities are not specific enough: Vague opportunities like “I wish this was easy to use” are not actionable. Refine them to be specific, like “Entering a movie title using the remote is hard.”
- Opportunities are solutions in disguise: If an opportunity has only one possible solution, it’s a solution request, not an opportunity. Always ask, “Is there more than one way to address this?” (e.g., “I don’t like commercials” vs. “I wish I could fast-forward through commercials”).
- Capturing feelings as opportunities: Feelings are signposts to opportunities, not the opportunities themselves. Instead of “I’m frustrated,” identify the cause: “I hate typing in my password every time I purchase a show.” This allows for actionable solutions.
Chapter Seven: Prioritizing Opportunities, Not Solutions
This chapter emphasizes the strategic importance of prioritizing opportunities over solutions, guiding product trios through a systematic approach to make impactful decisions.
The Problem with Solution-First Mindset
Many product teams fall into “the build trap,” as coined by Melissa Perri, where success is measured by outputs (features shipped) rather than outcomes (value created). This obsession with solutions leads to endless backlogs and reactive strategies focused on competitors. Product strategy truly happens in the opportunity space, emerging from decisions about which outcomes to pursue, customers to serve, and opportunities to address. Rushing straight to feature prioritization, rather than deeply understanding and selecting the right opportunities, means customers often don’t care about feature releases, leading to a lack of real impact.
Focusing on One Target Opportunity at a Time
To deliver value iteratively and maintain an Agile mindset, a product trio should focus on addressing only one opportunity at a time. This approach, consistent with Kanban’s principle of limiting work in progress, allows the team to explore multiple solutions for that specific opportunity (Chapter 8), enabling better compare-and-contrast decisions. Spreading effort across too many opportunities leads back to a waterfall-like lack of focus and delayed value delivery.
Using the Tree to Aid Decision Making
The Opportunity Solution Tree (OST) is invaluable for optimizing prioritization. Instead of assessing a flat list, teams leverage the tree’s hierarchy:
- Assess top-level opportunities: Compare and contrast the parent opportunities, selecting the highest priority one that is most likely to drive the desired outcome. This is a “compare and contrast” decision, not a “whether or not” one.
- Focus on the chosen branch: Once a top-level parent is selected, the team can effectively ignore other branches for the immediate prioritization cycle, significantly reducing the scope of assessment.
- Iterate down the branch: Repeat the assessment and prioritization process for the children of the selected opportunity, continuing until a leaf-node opportunity (one with no further children) is identified as the target.
Choosing a leaf-node opportunity ensures that the team delivers iterative value by solving a series of smaller, complete problems, rather than tackling sets of opportunities all at once.
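As an illustration only (not something the book prescribes), the walk down the tree can be sketched as a simple traversal: compare siblings at each level, pick the highest-priority one, and repeat until a leaf is reached. The `Opportunity` class, the `priority` field, and the example branch names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    """A node in a simplified Opportunity Solution Tree."""
    name: str
    priority: int = 0  # higher = more likely to drive the desired outcome
    children: list["Opportunity"] = field(default_factory=list)

def pick_target(node: Opportunity) -> Opportunity:
    """Walk down the tree, choosing the highest-priority child at each
    level, until a leaf-node opportunity (no children) is reached."""
    while node.children:
        node = max(node.children, key=lambda c: c.priority)
    return node

# Hypothetical branch loosely based on the book's streaming example
tree = Opportunity("Watch what I want", children=[
    Opportunity("Watch live sports", priority=3, children=[
        Opportunity("Watch my local team", priority=2),
        Opportunity("Watch playoff games", priority=5),
    ]),
    Opportunity("Find something new", priority=1),
])

print(pick_target(tree).name)  # → Watch playoff games
```

The priorities here stand in for the team's subjective compare-and-contrast judgment; Torres's point is that the hierarchy lets you ignore whole branches once a parent is chosen, which the traversal mirrors.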
Assessing a Set of Opportunities
When assessing a set of sibling opportunities, Torres recommends using four criteria, fostering a data-informed, subjective comparison rather than precise scoring:
- Opportunity Sizing: How many customers are affected, and how often? Teams make rough estimates using behavioral data, support tickets, surveys, or interview snapshots. Distinguish between impact on “how many” and “how often.”
- Market Factors: How will addressing this opportunity affect our market position? Consider whether it’s a “table stakes” item, a “strategic differentiator,” or influenced by external trends (e.g., “cord-cutters” for streaming services).
- Company Factors: How does it align with the company’s vision, mission, and strategic objectives? Consider organizational context, political climate, and team strengths/weaknesses. Prioritize opportunities that support company goals.
- Customer Factors: How important is this opportunity to our customers, and how satisfied are they with existing solutions? Prioritize important opportunities where current satisfaction is low.
Embracing the Messiness
Avoid rigid scoring or quantitative formulas for prioritization. This is a messy, subjective decision that benefits from healthy debate and considering different “lenses” of impact. There won’t always be a clear “winner,” and that’s okay. By treating it as a messy decision, teams leave room for doubt, making them more likely to course-correct if later learning reveals a less-than-optimal choice. This balance between confidence and doubt is key to wisdom, as Karl Weick suggests.
Two-Way Door Decisions
Prioritizing opportunities, despite their strategic importance, should be treated as two-way door decisions (reversible), not one-way door decisions (irreversible), a concept popularized by Jeff Bezos. This means teams should move fast and learn from acting rather than striving for perfect data or falling into analysis paralysis. The beauty of continuous discovery is the ability to course-correct quickly. Viewing decisions as reversible helps combat confirmation bias, as studies show people who view decisions as reversible are more likely to critically evaluate their choice and consider alternatives.
Avoiding Common Anti-Patterns in Prioritization
- Delaying a decision until there is more data: Resist the urge for perfection. Time-box decisions (e.g., an hour or two) and trust that further discovery or testing will reveal if a course correction is needed.
- Over-relying on one set of factors: Use all four lenses (sizing, market, company, customer) to gain a holistic perspective. Neglecting one leads to imbalanced decisions.
- Working backwards from your expected conclusion: Approach the exercise with an open mind, allowing the data and discussion to genuinely guide the decision rather than just justifying a pre-conceived idea. This prevents confirmation bias and fosters new insights.
Chapter Eight: Supercharged Ideation
This chapter focuses on effective ideation techniques, emphasizing quantity over quality initially to generate diverse and original solutions for a chosen target opportunity.
Quantity Leads to Quality in Ideation
Many teams jump to their first solution, but creativity research shows that fluency (quantity of ideas) correlates with flexibility (diversity) and originality (novelty). The most original ideas often emerge later in an ideation session. Even with overflowing backlogs, teams often only have one or two solutions for a given opportunity. The goal is to push past obvious, mediocre “first ideas” to discover more diverse and innovative solutions. For strategic opportunities, generating multiple ideas is crucial to ensure the best ones are uncovered, rather than settling for the initial, easy answer.
The Problem With Brainstorming
Traditional brainstorming, popularized by Alex Osborn in 1953 with rules like “focus on quantity” and “defer judgment,” has been shown by decades of academic research to be less effective than individual ideation. Studies consistently show that individuals generating ideas alone outperform groups in quantity, diversity, and originality. This is due to:
- Social loafing: People exert less effort in a group than they would working alone.
- Group conformity: Early ideas set a conservative tone, and members self-censor.
- Production blocking: Ideas are lost when others speak.
- Downward norm setting: Group performance can be limited by the weakest member.
Brainstorming advocates often experience an “illusion of group productivity” (Nijstad and Paulus), where groups feel more productive due to reduced “cognitive failures” (getting stuck), even if their actual output is lower. The benefit of hearing other people’s ideas, however, can be leveraged if ideation remains individual.
Getting Unstuck During Ideation
Generating ideas individually can be challenging. To overcome creative blocks:
- Take frequent breaks: Spread ideation throughout the day and change scenery to foster new ideas.
- Leverage incubation: Allow your brain to continue processing the problem unconsciously. If stuck, sleep on it.
- Look to analogous products for inspiration: Beyond competitors, explore how unrelated industries have solved similar problems (e.g., Velcro’s inspiration from a cocklebur, or job boards learning from online shopping sites for evaluation). This broadens the solution space.
- Consider extreme users: Think about what power users, first-time users, users with disabilities, or users in different locations/demographics might need. This can spark diverse ideas that might benefit all users.
- Embrace wild ideas: Don’t censor thoughts, no matter how outlandish. Wild ideas can often inspire more feasible, yet still innovative, solutions through combination and adaptation.
Putting Supercharged Ideation Into Practice
Follow a structured process for effective ideation:
- Review your target opportunity: Ensure everyone understands the specific need, pain point, or desire and its context.
- Generate ideas alone: Each trio member independently brainstorms as many solutions as possible. Push through initial blocks by taking breaks and seeking inspiration.
- Share ideas across your team: In a real-time or asynchronous setting, describe each idea, allowing questions and “riffing” to spark new thoughts.
- Repeat steps 2 and 3: This cycle ensures that hearing others’ ideas inspires more individual ideas, helping the team reach 15 to 20 diverse solutions for the target opportunity, pushing past the obvious ones.
Evaluating Your Ideas (Dot-Voting)
Once a large quantity of ideas is generated, it’s time to evaluate:
- Filter for relevance: First, quickly remove any ideas that do not actually solve the target opportunity, even if they are interesting.
- Dot-vote to select the top three: Research indicates groups are better at evaluating ideas than at generating them. Each team member gets three votes to distribute across the remaining ideas. The sole criterion is how well the idea addresses the target opportunity.
- Iterate on voting: If no clear top three emerge, discuss the ideas, allowing each person to pitch their choices and explain their reasoning, then vote again. The goal is to select three distinct ideas that the team is excited to explore further, setting up a good compare-and-contrast decision for subsequent testing.
Avoiding Common Anti-Patterns in Ideation
- Not including diverse perspectives: Ideation is best done with the entire team and potentially key stakeholders, ensuring a wider range of ideas.
- Generating too many variations of the same idea: While variations are fine, actively work to identify categorically different ideas. Seek inspiration from analogous products outside your industry to diversify.
- Limiting ideation to one session: Recognize that creativity benefits from incubation and spreading ideation over time, not just one meeting.
- Selecting ideas that don’t address the target opportunity: Rigorously filter out irrelevant ideas before voting. Stay true to the strategic decision made in opportunity prioritization.
Chapter Nine: Identifying Hidden Assumptions
This chapter focuses on uncovering the unspoken assumptions underlying product ideas, transforming a natural tendency for overconfidence into a rigorous process of risk mitigation.
The Need to Be Prepared to Be Wrong
Product teams are often susceptible to confirmation bias (seeking evidence that confirms beliefs) and the escalation of commitment (increasing commitment to an idea with increased investment). This leads to overconfidence in ideas, like the Portland affordable housing project which failed due to untested assumptions about family size. To make better decisions and avoid these pitfalls, teams must be prepared to be wrong. Working with a set of ideas rather than just one (as in Chapter 8) naturally fosters a compare-and-contrast mindset, which helps mitigate these biases. The key to rapid iteration is to test assumptions, not whole ideas, as testing assumptions is faster and helps guard against escalation of commitment. Marty Cagan notes that the best teams do 10-20 discovery iterations per week, a pace only possible through assumption testing.
Types of Assumptions
Assumptions can be categorized to ensure comprehensive coverage:
- Desirability assumptions: Do customers want or value the solution? Will they use it, and are they willing to do what’s required? (e.g., “Our subscriber wants to watch sports on our platform”).
- Viability assumptions: Will the solution work for the business? Does it create a return (revenue, cost savings, strategic advantage) that justifies the effort? (e.g., “Integrating local channel feeds won’t be too expensive”).
- Feasibility assumptions: Can we build it? This includes technical possibility, as well as organizational constraints like legal, security, compliance, or cultural support. (e.g., “Our platform is available when our subscriber wants to watch sports”).
- Usability assumptions: Can customers find, understand, and effectively use the solution? Is it accessible? (e.g., “Our subscriber can find where to go on our platform to find sports”).
- Ethical assumptions: Does the solution have the potential for harm (to users, society, or the business)? This includes data privacy, addictive potential, perpetuating inequalities, or brand damage. Questioning potential negative impacts is often a blind spot for teams.
Story Map to Get Clarity
Vague ideas often hide individual, unstated assumptions. Story mapping is a powerful technique to align the team on what an idea means and to surface these assumptions. For each solution idea, assume it already exists and map out:
- Key actors: Who interacts with the solution (e.g., sports consumer, streaming platform, local TV channel partners).
- Steps each actor takes: Be specific about the sequence of actions needed for users to get value from the solution.
- Horizontal sequence over time: Lay out steps sequentially, including optional paths and successful flows.
This detailed mapping forces specificity, revealing implicit assumptions about user behavior and system capabilities.
Using Story Maps to Generate Assumptions
Every step in a story map implies several assumptions across desirability, usability, and feasibility. By literally walking through each step, teams can systematically generate dozens of assumptions. For example, if a step is “Our subscriber comes to our platform to watch live sports,” assumptions include: “Our subscriber wants to watch sports,” “Our subscriber knows they can watch sports on our platform,” and “Our platform is available.” Even simple maps can yield many assumptions. The goal is to generate as many as possible to increase the likelihood of uncovering the riskiest ones, even though most will be harmless.
Conducting a Pre-Mortem
A pre-mortem, adapted from Gary Klein, is a powerful technique to generate assumptions, particularly viability and ethical ones. Teams imagine it’s six months in the future, and the product/initiative has completely failed. Then, they brainstorm all possible reasons for this failure. This leverages prospective hindsight to expose hidden assumptions that, if false, would lead to failure. Phrasing the outcome as “certain” failure is crucial for effectiveness, as found by Mitchell, Russo, and Pennington.
Walking the Lines of Your Opportunity Solution Tree
To uncover viability assumptions and the logical inferences behind an idea, work backward from the solution up the Opportunity Solution Tree:
- “This solution will address the target opportunity because…” (e.g., “Adding local channels will allow our subscribers to watch live sports because most of the major sports are on local channels”).
- “Addressing the target opportunity will drive the desired outcome because…” (e.g., “People will watch sports in addition to what they already watch”).
- Connect product outcomes to business outcomes: “People who watch more minutes are more likely to renew” and “The cost of adding local channels will be offset by the gain from more renewals.”
Each logical inference made is an assumption that can be tested.
Exploring Potential Harm (Ethical Assumptions)
Teams often overlook ethical assumptions. Ask: “What’s the potential harm in offering this solution?” Consider:
- Data practices: What data is collected, how is it used, and would customers be okay with this transparency?
- Addictive potential: Is the product designed to be habit-forming, and is this beneficial for the user?
- Inclusivity: Who is being left out due to assumptions about resources (money, time, internet access)?
- Societal impact: Does it exacerbate inequalities or harm relationships?
- Abuse potential: How might trolls or bad actors misuse the product?
A useful framing question is: “If the New York Times ran a front-page story about this solution, including internal conversations and data practices, would that be a good thing?”
Mixing and Matching the Methods
Teams don’t need to use every assumption-generating method for every idea, every time. Instead, mix and match methods to shore up specific team blind spots. For example, if a team struggles with viability assumptions, they can focus on walking the lines of the OST. If desirability assumptions are missed, story mapping can help. The goal is to explicitly enumerate assumptions that underpin an idea, preparing the team for testing.
Prioritizing Assumptions (Assumption Mapping)
Once a long list of assumptions is generated for each of the three ideas, use assumption mapping (David J. Bland) to identify “leap of faith” assumptions—those that carry the most risk and need immediate testing.
- X-axis: How much do we know about this assumption? (From weak evidence on the right to strong evidence on the left).
- Y-axis: How important is this assumption to the success of your idea? (From less important at the bottom to more important at the top).
Plot each assumption on this 2D grid, making quick, relative judgments. The assumptions in the top-most, right-most corner are the “leap of faith” assumptions – the riskiest ones to test first. This process should be fast (e.g., 10 minutes per idea) and repeated for all three solution ideas, preparing for the next stage of rapid assumption testing.
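The relative plotting can be approximated in code, purely as a sketch: the 1–5 scales and the example assumptions below are inventions of this summary, since Torres (following Bland) frames the exercise as a quick visual judgment, not a scored formula.

```python
def leap_of_faith(assumptions: list[dict], top_n: int = 3) -> list[str]:
    """Rank assumptions by risk: most important first, and among equally
    important ones, least evidence first (the top-right of the grid)."""
    ranked = sorted(
        assumptions,
        key=lambda a: (a["importance"], -a["evidence"]),
        reverse=True,
    )
    return [a["name"] for a in ranked[:top_n]]

# Hypothetical assumptions on 1-5 scales (importance high = critical,
# evidence high = we already know a lot about it)
assumptions = [
    {"name": "Subscribers want to watch sports",   "importance": 5, "evidence": 2},
    {"name": "Local feeds are cheap to integrate", "importance": 4, "evidence": 1},
    {"name": "Subscribers can find the sports tab","importance": 2, "evidence": 4},
]
print(leap_of_faith(assumptions, top_n=2))
```

The point of the grid — fast, relative judgments rather than debated scores — is worth preserving even if a team keeps a list like this: the numbers are placements, not measurements.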
Avoiding Common Anti-Patterns in Identifying Assumptions
- Not generating enough assumptions: Teams often underestimate the number of underlying assumptions. Aim for 20-30 assumptions per idea to increase the chance of finding the riskiest ones.
- Phrasing assumptions such that you need them to be false: Always phrase assumptions as what needs to be true for the idea to succeed (e.g., “Customers will remember their passwords,” not “Customers won’t remember their password”). This makes them easier to test.
- Not being specific enough: Avoid vague assumptions like “Customers will have time.” Instead, be precise: “Customers will take the time to browse all options on our getting-started page.”
- Favoring one category at the cost of others: Most teams have biases (e.g., strong on usability, weak on viability). Use the five assumption categories (desirability, viability, feasibility, usability, ethical) to systematically check for and address team blind spots.
Chapter Ten: Testing Assumptions, Not Ideas
This chapter provides a detailed guide on how to design and execute effective assumption tests, focusing on rapid iteration, simulating experiences, and evaluating real behavior to mitigate risk.
Working With Sets of Ideas
When starting assumption testing, it’s crucial to continue with the compare-and-contrast mindset for all three brainstormed ideas from Chapter 8. Overcommitting to a single favorite idea is a trap due to confirmation bias and the escalation of commitment. By systematically collecting evidence for assumptions across all three ideas, teams can make better, less biased decisions about which ideas are most promising and identify a clear front-runner.
Simulate an Experience, Evaluate Behavior
The goal of assumption testing is to collect reliable data about actual behavior, not just opinions. A strong assumption test simulates a specific experience to give participants an opportunity to behave in line with (or against) the assumption. The simulation should be as minimal as possible to allow for quick iteration.
- Example: “Our subscribers want to watch sports” assumption. Simulate a home screen showing sports alongside TV shows and movies. Evaluate by observing how many participants choose a sporting event.
- Example: “Our subscriber wants to watch sports on our platform” assumption. Simulate a scenario where a big game is starting, offering it on three platforms (including yours). Evaluate how many choose your service.
The key is to define what success looks like upfront by setting specific evaluation criteria (e.g., “At least 3 out of 10 people choose sports”). This aligns the team on how to interpret results and guards against confirmation bias by preventing retroactive justification of outcomes. The negotiation around these numbers helps balance testing speed with actionable data.
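A sketch of what “criteria defined upfront” means in practice (the numbers mirror the “3 out of 10” example above; the event log and names are hypothetical):

```python
# Evaluation criteria agreed on BEFORE running the test (hypothetical numbers)
criteria = {"participants": 10, "min_choosing_sports": 3}

def evaluate(results: list[str]) -> bool:
    """Judge the simulation against the upfront criteria: did enough
    participants choose a sporting event from the simulated home screen?"""
    assert len(results) == criteria["participants"], "sample differs from plan"
    chose_sports = sum(1 for choice in results if choice == "sports")
    return chose_sports >= criteria["min_choosing_sports"]

observed = ["movie", "sports", "tv", "sports", "movie",
            "sports", "tv", "movie", "sports", "tv"]
print(evaluate(observed))  # → True (4 of 10 chose sports)
```

Writing the threshold down before collecting results is what guards against the retroactive justification the chapter warns about.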
Early Signals vs. Large-Scale Experiments
Resist the temptation to start with large-scale, quantitative experiments (e.g., surveys of hundreds, production A/B tests). Instead, start small to get early signals.
- Why small?: Small tests (e.g., 5-10 participants) are faster (a day or two vs. weeks) and help teams fail sooner. As Karl Popper states, “Good tests kill flawed theories; we remain alive to guess again.” This minimizes investment in faulty ideas.
- Progression: If a small test provides a positive early signal, then gradually invest in larger, more reliable tests.
- False negatives/positives: While small numbers can lead to false negatives (assumption seems false, but is true) or false positives (assumption seems true, but is false), their cost is low with rapid iteration. False negatives simply mean running more small tests, and false positives are typically caught in subsequent testing rounds. Triangulation (using multiple small tests with different methods) also helps mitigate these risks. The abundance of ideas and opportunities means discarding one due to a false negative isn’t catastrophic. The key is to mitigate risk, not seek absolute truth.
A Quick Word on Science (Product Teams vs. Scientists)
Product teams are not scientists seeking fundamental truths; they are trying to create products that improve customers’ lives. Their feedback loops are much faster (launching a product immediately reveals customer interaction), unlike scientific studies that take decades. While product teams should adopt a scientific mindset regarding reliability and validity, their research findings are confirming or disconfirming evidence, not absolute truths. The goal is to mitigate risk, doing just enough research to reduce risks to an acceptable level.
Running Assumption Tests (Tools and Methods)
Achieving 10-20 discovery iterations per week is possible with the right tools:
- Unmoderated user testing services: Tools like UserTesting allow teams to post stimuli (prototypes), define tasks, and get video recordings of participants completing them on their own time. This drastically reduces the time needed for recruitment, scheduling, and moderation (e.g., a few days instead of weeks).
- One-question surveys: Many assumptions can be tested with quick, single questions. When asking about past behavior, focus on specific instances (e.g., “When was the last time you watched a sporting event?”), not generalizations or future predictions. Surveys can also simulate experiences (e.g., “What are your favorite sports teams?” to test willingness to share data).
- Data mining: Leverage existing internal data (e.g., search queries in a database) to test assumptions, remembering to define evaluation criteria upfront to ensure alignment and guard against bias.
Most assumptions can be tested with a combination of these methods. The “assumption-simulate-evaluate” framework is key to strong assumption testing.
Avoiding Common Anti-Patterns in Testing Assumptions
- Overly complex simulations: Don’t spend weeks building a perfect simulation. Design fast tests (1-2 days, max a week) to gather quick signals and maintain high iteration speed.
- Using percentages instead of specific numbers in criteria: Be explicit about how many people to test with and how many need to exhibit desired behavior (e.g., “7 out of 10 people,” not “70%”). This avoids ambiguity and aligns the team.
- Not defining enough evaluation criteria: Ensure all necessary measurements are explicitly defined (e.g., for email tests, open rates, click-throughs, and subsequent actions).
- Testing with the wrong audience: Always test with participants who experience the target opportunity and represent desired variation (demographics, behaviors), not just the easiest audience to reach.
- Designing for less than the best-case scenario: For initial small tests, design them to be likely to pass with the most ideal audience. If they fail even in this best case, the results are less ambiguous and indicate a true problem. This helps teams learn more from failures.
Chapter Eleven: Measuring Impact
This chapter explains how to measure the real impact of product changes on desired outcomes, linking discovery and delivery to create a continuous feedback loop.
The Intertwined Nature of Discovery and Delivery
The AfterCollege story demonstrates how discovery and delivery are not distinct phases but are tightly coupled. Building a live prototype (delivery) was essential for testing critical assumptions (discovery) about how students would respond to a new job search interface. Initial positive results from the prototype (e.g., 83% of visitors starting their search compared to 36% on the old interface) led to further delivery and discovery questions. The team continued to experiment in production, collecting real data on search starts, job views, and applications. This continuous loop ensures that insights from delivery inform future discovery, and vice versa.
Don’t Measure Everything (Start Small)
It’s counterintuitive, but teams should not try to measure everything from the start when instrumenting their product. Attempting comprehensive upfront planning leads to paralysis, weeks of debate on event tracking and naming, and inevitably, mistakes. Instead, adopt an iterative approach to instrumentation, starting small and evolving as needed. The best strategy is to instrument what you need to evaluate current assumption tests, then gradually expand.
Instrumenting Your Evaluation Criteria
Begin by measuring only what’s necessary to evaluate your immediate assumption tests. For the AfterCollege example, the team focused on assumptions like:
- Students will start more searches: Measured the number of people who visited the search page and the number of people who started a search.
- Students will view jobs: Tracked the number of people who viewed at least one job.
- Students will apply to jobs: Measured the number of people who applied for at least one job.
Crucially, they often counted the number of people who took an action, not just the raw number of actions, to understand the breadth of engagement. Later, for “relevance” assumptions, they started counting actions and comparing metrics like “position of job view” or “ratio of job views to applications” to understand effort. This selective instrumentation ensures data is relevant to the immediate learning goals.
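The distinction between counting people and counting raw actions can be sketched like this (the event log, user IDs, and action names are invented for illustration):

```python
# events: (user_id, action) pairs from a hypothetical search log
events = [
    ("u1", "visit_search"), ("u1", "start_search"), ("u1", "start_search"),
    ("u2", "visit_search"), ("u2", "start_search"),
    ("u3", "visit_search"),
]

def people_who(action: str) -> int:
    """Count distinct people (not raw events) who took an action."""
    return len({user for user, a in events if a == action})

visited = people_who("visit_search")  # 3 distinct people
started = people_who("start_search")  # 2 distinct people (u1 counted once)
print(f"{started}/{visited} visitors started a search")  # → 2/3 visitors started a search
```

Counting `len(events)` per action instead would overstate engagement whenever a few heavy users repeat an action, which is exactly the breadth-versus-depth trade-off the AfterCollege team navigated.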
Measuring Impact on Your Desired Outcome
Beyond assumption tests, it’s critical to measure impact on the desired outcome. At AfterCollege, the desired business outcome was to increase the number of students who found jobs through the platform. This was a challenging metric because job offers and hires happened off-platform. The team didn’t shy away from this difficulty. They developed a strategy to collect post-application data by sending an email 21 days after a job application asking students about their status (e.g., “I got the job,” “I got an interview”). While the reply rate was initially only 5%, it grew to 37%, providing crucial visibility into the true impact. The lesson is to not be afraid to measure hard things and to continually chip away at improving measurement of the core outcome.
Revisiting Different Types of Outcomes (Product to Business Outcome Link)
The AfterCollege story highlights the connection between product outcomes (e.g., improved search starts) and business outcomes (e.g., increased hires). While the new interface boosted search starts and job applications (product outcomes), the team still needed to verify if this translated into more actual hires (business outcome). This required continued A/B testing in the production environment.
For the streaming example, testing if adding sports increases viewer minutes (product outcome) might require a small-scale, live prototype (e.g., streaming one sporting event). Measuring if this, in turn, increases subscriber retention (business outcome) is a longer-term evaluation. This emphasizes that the link between a product outcome and a business outcome is a theory that needs to be tested and continuously monitored over time. If a product outcome stops driving the business outcome, the team must find a new, more impactful product outcome.
Avoiding Common Anti-Patterns in Measuring Impact
- Getting stuck trying to measure everything: Avoid treating instrumentation as a large waterfall project. Start small, focusing on immediate assumption test needs, then product outcome, and finally, business outcome.
- Hyperfocusing on assumption tests and forgetting the OST: It’s exciting when solutions work, but remember the ultimate goal: satisfying customer needs while creating value for the business. Always connect solutions back to the desired outcome on the Opportunity Solution Tree to ensure viability and long-term success.
- Forgetting to test the connection between your product outcome and your business outcome: This connection is a hypothesis. Continuously evaluate if driving the product outcome actually drives the business outcome. This ensures the business remains viable and can continue serving customers.
Chapter Twelve: Managing the Cycles
This chapter delves into the messy, non-linear reality of continuous discovery, using real-world examples to illustrate how teams navigate surprises, pivot when necessary, and continually improve their process.
The Messy Reality of Discovery
The continuous discovery habits are not a linear process; they are a messy, winding path with twists and turns. The true work of discovery is managing these cycles, looping back to earlier steps when surprises arise. Real-world examples demonstrate how successful product teams adapt when assumptions prove false or new constraints emerge, relying on the core discovery habits to guide their way.
Simply Business: Not All Opportunities Need Solutions
Mina Kasherova’s team at Simply Business initially identified “the havoc of late client payments” as a high-priority opportunity based on customer interviews and market data. They ideated and launched three assumption tests (articles, invoice discounts, automated collection).
- The articles test failed (low click-through), raising initial concerns.
- The usability tests for discounting and automation revealed a critical insight: customers struggled to understand the solutions, and, more importantly, they didn’t want third-party help with collections, fearing it would harm client relationships.
This surprising disconfirming evidence meant that the opportunity, while real, was not one that customers wanted Simply Business to solve, at least not in the way envisioned. Mina’s team quickly pivoted to a new target opportunity on their OST, enabled by their strong habit of weekly interviews. This demonstrated the value of quick assumption tests in avoiding investment in unfruitful opportunities, and the wisdom to course-correct. The time saved by not building the wrong feature was a significant win.
CarMax: The Importance of Now, Next, Future
Victoria Lawson’s team at CarMax focused on the opportunity: “I want to feel confident that this car is in good condition.” They saw competitors highlighting cosmetic issues but knew CarMax’s reconditioning process fixed many of these.
- They first validated that customers were willing to pay more for cosmetically reconditioned cars, confirming the value of CarMax’s service.
- They explored two strategies: highlighting general reconditioning value vs. vehicle-specific reconditioning information.
- Initial experiments with text overlays in the image gallery (the general approach) failed to meet high engagement thresholds, despite building trust.
This led to the realization that customers needed vehicle-specific information to truly build confidence, which was a much larger, multi-team effort. Victoria’s team effectively used discovery to understand which opportunities were feasible “now” (generalized messaging) versus “next” or “future” (vehicle-specific data). When “now” didn’t pan out, they used their discovery insights to make the case for investing in the harder, long-term solution, balancing immediate progress with strategic groundwork.
FCSAmerica: Balancing Customer Value With Business Needs
Carl Horne’s team at Farm Credit Services of America (FCSAmerica) aimed to increase digital customer engagement. A key challenge was balancing this with the customer’s strong preference for a trusted, high-touch relationship with financial officers.
- They discovered the opportunity: “What can I afford?” which customers were already researching online.
- They experimented with an online calculator and, crucially, tested a chat feature alongside it.
- Their assumption tests showed that customers consistently avoided the chat feature at this stage, indicating the “human touch” wasn’t needed during calculation.
This discovery allowed Carl’s team to reconcile business goals (digital self-service) with customer needs, identifying a specific part of the process (affordability calculation) where digital engagement was preferred. This led to the successful FarmLend program, enabling online loan applications while preserving the critical human relationship later in the process.
Snagajob: Iterating Through Small Opportunities for Big Impact
Amy O’Callaghan and Jenn Atkins at Snagajob focused on improving their Net Promoter Score (NPS). They uncovered a “big, hairy problem”: “I can’t get in touch with my candidates” (only 1 in 10 answered calls).
- Through “walkabouts” and acting as unpaid hiring assistants, they discovered that calling candidates was ineffective due to candidates’ mobile-first, text-preferring persona.
- They quickly pivoted to texting candidates, first asking permission to call, then moving to follow-up questions and scheduling.
- They continuously made small iterations and improvements (e.g., using SurveyMonkey links for follow-up questions, finding web-accessible texting tools) to overcome successive hurdles like interview scheduling complexity and no-shows.
This story illustrates how iteratively tackling small sub-opportunities can lead to significant impact on a large, complex problem. Their relentless focus on one small problem at a time, guided by continuous learning, snowballed into a successful solution that improved candidate-hiring manager communication.
Avoiding Common Anti-Patterns in Managing Cycles
- Overcommitting to an opportunity: Recognize when an opportunity, despite seeming important, isn’t the right fit for your team or company right now. Use quick assumption tests to assess fit and pivot quickly, like Simply Business did with late payments.
- Avoiding hard opportunities: Don’t confuse quick testing/iterative delivery with only easy solutions. Like CarMax, balance quick wins with laying the groundwork for harder, more impactful strategic opportunities that require long-term investment and cross-team collaboration.
- Drawing conclusions from shallow learnings: Don’t abandon a strategy based on superficial insights. Dig deeper to understand nuances and reconcile conflicting data, as FCSAmerica did in understanding when the “human touch” was truly needed.
- Giving up before small changes have time to add up: Many outcomes require a series of small, iterative changes. Like Snagajob, persist in chipping away at successive sub-opportunities; the collective impact will eventually move the needle on the desired outcome.
Chapter Thirteen: Show Your Work
This chapter highlights the critical importance of effectively communicating discovery work to stakeholders, advocating for transparency and co-creation over simply presenting conclusions.
The Pitfall of Jumping Straight to Conclusions
Many product teams, when meeting with stakeholders, focus on presenting conclusions (e.g., roadmaps, release plans, backlogs). This often stems from stakeholders asking for these outputs. The problem is that stakeholders, too, have their own opinions and preferences about outputs, often not grounded in deep discovery. Presenting conclusions invites an opinion battle, which product teams, especially against more senior stakeholders (the “HiPPO,” or highest-paid person’s opinion), are likely to lose. This approach fails to share the journey of discovery, creating a disconnect and leading to complaints about stakeholder interference.
Slow Down and Show Your Work
Instead of conclusions, teams should slow down and show their work using visual artifacts, particularly the Opportunity Solution Tree (OST). This helps set the context for how product decisions are made and builds stakeholder confidence.
- Start at the top: Remind stakeholders of the desired outcome and confirm alignment.
- Share opportunity mapping: Explain how the opportunity space was explored and mapped, highlighting top-level opportunities. Ask for missed opportunities and capture suggestions for future vetting.
- Share prioritization: Walk through the decision process of assessing and prioritizing opportunities, explaining why certain paths were chosen. Invite feedback on alternative decisions.
- Deep dive into target opportunity: Help stakeholders fully understand the chosen customer need or pain point using interview snapshots to build empathy. This is crucial for evaluating solutions based on the problem, not just preference.
- Share generated solutions: Present the set of three solutions for the target opportunity. Invite stakeholders to contribute their own ideas and be open to swapping them in for diversity.
- Share story maps and assumptions: If testing has begun, show how each solution works via story maps. Invite stakeholders to add their own assumptions, leveraging their unique expertise to identify blind spots.
- Share assumption map and tests: Present the prioritized assumptions and the plans/results of assumption tests. Ask for feedback and incorporate it.
- Repeat: This is a continuous process. Tailor the level of detail to the specific stakeholder (e.g., weekly updates for a boss, monthly highlights for a marketing manager, top-level summary for a CEO). Even when asked for outputs, always provide the underlying discovery context.
Generating and Evaluating Options
By showing their work, product teams invite stakeholders to co-create rather than just react. Instead of presenting a fixed roadmap, they present potential paths and invite stakeholders to help choose the right path. This collaborative approach fosters greater buy-in and long-term success because stakeholders feel invested in the decision-making process. The goal is to move from presenting a single conclusion to generating and evaluating options together.
Common Anti-Patterns in Stakeholder Management
- Telling instead of showing: Due to the “curse of knowledge,” product teams forget how much context they have. They rush to explain conclusions rather than letting stakeholders follow the logic and reach their own conclusions. Slow down and guide stakeholders through the discovery journey.
- Overwhelming stakeholders with messy details: Filter information based on the stakeholder’s needs. Provide a tailored level of detail (e.g., high-level narrative for a CEO, more depth for a direct manager). Focus on what’s most relevant to their concerns and responsibilities.
- Arguing with stakeholders about why their ideas won’t work: Avoid shooting down ideas directly. Instead, use the discovery framework to show stakeholders where their idea fits (e.g., if it addresses a different outcome or opportunity) or to surface its underlying assumptions through story mapping and assumption identification. Then, share existing data that speaks to those assumptions, allowing them to draw their own conclusions.
- Trying to win the ideological battle instead of focusing on the decision at hand: Don’t get stuck in arguments about the “right way” to do discovery. Focus on the immediate decision and how to achieve the best outcome within the current constraints. Choose battles wisely, demonstrating the benefits of discovery through successful outcomes rather than ideological arguments.
Key Takeaways: What You Need to Remember
Core Insights from Continuous Discovery Habits
- Focus on outcomes over outputs: Success is measured by the value created for customers and the business, not just features delivered.
- Continuous engagement with customers is non-negotiable: Weekly touchpoints allow for real-time learning and informed daily decisions.
- The Opportunity Solution Tree (OST) is your strategic guide: It helps you visually structure opportunities, link them to solutions, and stay focused on desired outcomes.
- Prioritize opportunities, not solutions: Strategic decisions are made in the opportunity space, deciding which customer problems to solve before ideating on how to solve them.
- Test assumptions, not whole ideas: Rapidly testing underlying assumptions is faster and more efficient for mitigating risk than building and testing full solutions.
- Collaboration is key for product trios: Product managers, designers, and engineers must work together, sharing ownership and expertise at every stage of discovery.
- Embrace messiness and iterate: Discovery is not linear; learn from surprises, pivot quickly when assumptions are disproven, and view failures as learning opportunities.
- Show your work, don’t just tell: Transparently sharing the discovery process with stakeholders builds trust, gains buy-in, and fosters co-creation.
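As a rough illustration (the node names and helper below are hypothetical, not from the book), the Opportunity Solution Tree hierarchy described above, outcome at the root, then opportunities, solutions, and assumption tests, can be sketched as a small tree structure:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a sketched Opportunity Solution Tree (OST).

    kind is one of: "outcome", "opportunity", "solution", "assumption_test",
    mirroring the four levels of the tree as the book describes them.
    """
    kind: str
    label: str
    children: list["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        """Attach a child node and return it, so trees can be built fluently."""
        self.children.append(child)
        return child

def depth(node: Node) -> int:
    """Number of levels from this node down to its deepest leaf."""
    if not node.children:
        return 1
    return 1 + max(depth(child) for child in node.children)

# A tiny example tree using the streaming scenario from the summary.
outcome = Node("outcome", "Increase average viewer minutes")
opportunity = outcome.add(Node("opportunity", "I want live sports on the platform"))
solution = opportunity.add(Node("solution", "Stream one pilot sporting event"))
solution.add(Node("assumption_test", "Viewers will tune in for a live event"))

print(depth(outcome))  # the four OST levels give a depth of 4
```

The point of the sketch is only that every solution hangs off an opportunity, and every opportunity hangs off the single desired outcome, which is what keeps discovery work connected to business value.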
Immediate Actions to Take Today
- Identify your product trio: Find a product manager, designer, and engineer to commit to adopting these habits together.
- Start continuous interviewing: Schedule your first weekly customer interview. Even if it’s just 5 minutes or with a proxy, get started.
- Define your desired outcome: Work with your leadership to clarify one product outcome your team will focus on, ensuring it drives a business outcome.
- Map out your customer’s experience: Start drawing a simple experience map of your customer’s current journey related to your outcome.
- Begin opportunity mapping: As you interview, capture needs, pain points, and desires on an Opportunity Solution Tree, starting to group and structure them.
- Generate multiple solutions: For your top opportunity, brainstorm at least 15-20 diverse solution ideas before selecting three to explore further.
- Identify “leap of faith” assumptions: For your top solutions, story map them and conduct quick assumption mapping to pinpoint the riskiest assumptions.
- Design your first assumption test: Plan a small, quick simulation to test one “leap of faith” assumption, defining clear success criteria upfront.
Questions for Personal Application
- What is the single most important outcome my team needs to drive in the next quarter?
- How can I get direct access to one customer next week to hear their story related to our product?
- What implicit assumptions am I making about my current product ideas that I haven’t explicitly tested?
- How can I use a visual tool like an Opportunity Solution Tree to structure our team’s current thinking?
- Who on my team (or adjacent teams) can I partner with to start applying one of these discovery habits?
- How can I share our discovery process, not just our conclusions, with a key stakeholder this week?
- What is one small, reversible experiment I can run this week to test a core assumption?
- How can our retrospectives be improved to focus on learning from discovery surprises?