
Lean Customer Development: Complete Summary of Cindy Alvarez’s Approach for Building Products Customers Will Buy
This comprehensive summary delves into Cindy Alvarez’s Lean Customer Development, a practical guide that shines a light on the critical discipline of understanding the customer. Alvarez, with her extensive experience in both early-stage startups and Fortune 500 companies like Microsoft, provides actionable steps for product managers, designers, engineers, and entrepreneurs to build products that customers genuinely want and are willing to purchase. The book emphasizes a hypothesis-driven approach to reduce business risks by challenging assumptions about customers, their needs, and their buying behaviors. By the end of this summary, readers will have a clear understanding of how to implement deep customer learning in parallel with product development to avoid building products nobody wants, ensuring that teams focus on bringing the best experiences to life.
Lean Customer Development is for anyone tasked with ushering in change within an organization, regardless of size or industry. It provides a nuts-and-bolts guide to ensure that innovation translates into real-world success by continuously engaging with customers as partners. This summary covers Alvarez’s pragmatic, approachable, and fast methodology in detail: how to form hypotheses, find potential customers, ask the right questions, make sense of the answers, and decide what to build next.
Chapter 1: Why You Need Customer Development – Overcoming Internal Resistance and Maximizing Learning
Chapter 1 introduces the core premise of customer development: customers are what make a product successful. Without buyers, even the most innovative and beautiful product will fail. It addresses the common misconception that customer development is only for startups and instead highlights its universal applicability in reducing business risk. This chapter aims to arm readers with the facts needed to overcome initial resistance within their organizations.
Understanding What Customer Development Truly Is
Customer development is a hypothesis-driven approach to understanding key aspects of the market. It focuses on identifying: who your customers are, what problems and needs they have, how they are currently behaving, which solutions customers will pay for (even if not yet built), and how to provide solutions that align with customer decision-making processes. All insights gained from customer development are centered around testing these hypotheses. Cindy Alvarez’s approach specifically refers to “lean customer development”, emphasizing a pragmatic, approachable, and fast process adaptable for both startups and established companies. This method significantly reduces wasted effort by confirming early on whether a product will be desired by the market.
Distinguishing Customer Development from Other Business Functions
It is crucial to understand what customer development is not to manage expectations and roles within an organization.
- Not Just for Startups: While originating in startup philosophy, customer development is equally beneficial for large, mature companies like Intuit, General Electric, and Microsoft, enabling internal innovation and adaptability in changing markets. It helps even established businesses avoid disruption by continuously learning.
- Not Product Development: Product development answers “When (and what) can they buy?”, while customer development answers “Will they buy it?”. The two processes run in parallel; customer development informs product development, ensuring that what is built has a market. This integrated approach means you know if customers will buy before the product launches, saving substantial resources.
- Not a Replacement for Product Management: Customer development enhances product management by providing deeper insights into customer problems and needs. It doesn’t dictate what to build but rather refines the product vision and prioritization process based on validated learning. Product managers still need to synthesize information and make strategic decisions.
- Not User Research: While borrowing techniques from user research, customer development is focused on “advocating for the business” by ensuring product-market fit, whereas user research traditionally “advocates for the user” to delight them. The former is a necessity for a sustainable business; the latter is often treated as optional.
Why Customer Development is Essential for Success
The high failure rate of new products and venture-backed startups (around 75% of venture-backed startups fail) underscores the critical need for customer development. Even large companies like Microsoft and Amazon find that only about one-third of their ideas improve intended metrics. This highlights that intuition and internal expertise are insufficient predictors of market success. Customer development improves these odds by providing a systematic, repeatable process for learning. It allows for exploration and iteration during the cheapest phase of development (the “Think” phase), before significant coding or design investment.
Overcoming Cognitive Biases
A major reason for product failure is cognitive bias, particularly confirmation bias. Humans naturally seek out and interpret information that confirms their existing beliefs, leading to an inability to objectively assess whether an idea is flawed. This unconscious sabotage can cause teams to ignore contradictory feedback or dismiss users who don’t understand their product. Writing down assumptions explicitly and rigorously testing them is crucial to counteract this innate tendency, forcing teams to confront reality rather than reinforce pre-existing notions.
Addressing Common Objections to Customer Development
Alvarez provides practical responses to frequent objections, enabling smoother adoption within organizations.
- “Fear of Idea Theft”: Explain that the focus is on understanding problems, not revealing solutions, and that execution matters more than the initial idea.
- “Bad Press Coverage”: Emphasize small sample sizes and appropriate expectation setting, noting that many large, established companies successfully use these methods without negative press. NDAs can be used if necessary.
- “Difficulty Finding Participants”: Frame it as a necessary step for any product launch, encouraging teams to identify where their target customers naturally gather.
- “Damaging Existing Customer Relationships”: Position it as an opportunity to build stronger, more collaborative relationships by showing customers their input is valued.
- “Redundancy with Market Research/Usability Testing”: Clarify that customer development provides specific behavioral and purchasing insights that these other methods do not, focusing on whether customers will buy rather than whether they can use it.
- “Time Taken Away from Product Building”: Highlight that a few hours of customer development can save weeks of coding and design time by invalidating flawed assumptions early. It’s a parallel process, not a replacement.
- “Engineers Should Focus on Building”: Stress that product success requires understanding the problem, and involving engineers/designers in lightweight customer interactions provides invaluable direct perspective, improving product quality.
Chapter 2: Where Should I Start? – Laying the Foundation for Effective Learning
This chapter guides readers through the initial foundational exercises that are crucial for effective customer development. These steps, designed to take less than an hour total, help align the team and establish clear hypotheses before engaging with customers. The goal is to maximize learning from every interaction by starting with a strong, shared understanding.
Exercise 1: Identifying Your Assumptions
The first critical step is to explicitly identify all assumptions about customers, products, and partners. This is vital whether launching a new company, a new product, or just a new feature, as these assumptions are essentially guesses until validated. A 10-minute brainstorming session with sticky notes, where participants write down assumptions as quickly as possible without discussion, is recommended. This helps uncover unspoken beliefs.
Prompts for Unearthing Assumptions:
- “Customers have [specific] problem.”
- “Customers are willing to invest [amount] to solve this problem.”
- “Stakeholders involved in using/buying this product are [roles].”
- “Partners involved in building/distributing this product are [entities].”
- “Resources required in building/servicing this product are [types].”
- “If customers did not buy/use our product, they would buy/use [alternative].”
- “Once customers are using our product, they will gain [benefit].”
- “This problem affects our customers [frequency/severity].”
- “Customers are already using tools like [examples].”
- “Customer purchasing decisions are influenced by [factors].”
- “Customers have [job title] or [social identity].”
- “This product will be useful to our customers because [reason].”
- “Customers’ comfort level with technology is [level].”
- “Customers’ comfort level with change is [level].”
- “It will take [time] to build/produce this product.”
- “It will take [time] to get X customers or X% usage.”
After brainstorming, cluster similar sticky notes to identify areas of agreement or, more importantly, internal misalignments. These assumptions will serve as a critical reminder that they are unproven and will be revisited throughout the customer development process for validation or invalidation. The Business Model Canvas is also mentioned as an alternative or complementary tool for identifying assumptions across a broader business context.
Exercise 2: Writing Your Problem Hypothesis
The next step is to formulate a simple, provable problem hypothesis. This hypothesis will be the core focus of validation efforts. It should encapsulate the “who, what, how much, when, and why” of the customer’s problem.
Standard Hypothesis Format:
- “I believe [type of people] experience [type of problem] when doing [type of task].”
- OR: “I believe [type of people] experience [type of problem] because of [limit or constraint].”
Alvarez stresses the importance of going narrow with the hypothesis. A very specific focus allows for faster progress in proving the hypothesis right or wrong, even if it means guessing wrong initially. Broad scopes lead to too much variation and slower learning. For existing products, Alvarez suggests working backward from the value provided to define the problem the product solves, as demonstrated by examples like Amazon S3, MailChimp, Halo SleepSack, Manpacks, and Hotwire’s geographical orientation problem.
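To make the fill-in-the-blank format concrete, here is a minimal sketch (in Python, purely illustrative and not from the book; the class and field names are this summary’s own) of how a team might record a narrow hypothesis in Alvarez’s standard format and keep it alongside the evidence gathered for or against it:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ProblemHypothesis:
    """One narrow, provable problem hypothesis in the standard format."""
    people: str                        # [type of people]
    problem: str                       # [type of problem]
    task: Optional[str] = None         # [type of task] (use task OR constraint)
    constraint: Optional[str] = None   # [limit or constraint]
    evidence: List[str] = field(default_factory=list)  # notes that validate or invalidate it

    def statement(self) -> str:
        if self.task:
            return f"I believe {self.people} experience {self.problem} when doing {self.task}."
        return f"I believe {self.people} experience {self.problem} because of {self.constraint}."

# A deliberately narrow example, per the advice to go narrow:
h = ProblemHypothesis(
    people="freelance designers",
    problem="losing billable hours",
    task="invoicing many small clients",
)
print(h.statement())
```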
Exercise 3: Mapping Your Target Customer Profile
To guide customer conversations, teams need a clear understanding of what their ideal customer looks like. This involves more than just demographics; it’s about abilities, needs, and environment that make them likely to buy the product. The focus should be on “earlyvangelists” – those with the most severe problem and highest motivation to solve it, found on the left side of the innovation adoption lifecycle.
Creating the Customer Profile:
- Start by asking broad questions: “What is the problem?” and “Who is experiencing this problem?”
- Use a “traits spectrum” (e.g., cash vs. time, low-tech vs. tech-savvy, values adventure vs. predictability) drawn on a whiteboard to explore opposing qualities and where the ideal customer might fall. This visual exercise encourages team participation and helps identify decision-influencing traits.
- Supplement with general questions: “What does this person worry about the most?”, “What successes or rewards motivate this person?”, “What is this person’s job title or social identity?”
Demographics are not customers: Alvarez explicitly states that traditional demographic information (age, gender, income) is a poor substitute for understanding buying drivers. Specific, detailed, and passionate information from individuals is far more valuable in the early stages, as it reveals the underlying needs and incentives that actually drive purchasing decisions. This process provides structure for future customer conversations, allowing teams to validate or invalidate assumptions with concrete evidence.
Chapter 3: Who Should I Be Talking To? – Strategies for Finding and Engaging Prospects
This chapter tackles the common fears of not knowing how to find the right customers and doubting their willingness to talk. Alvarez provides practical methods for identifying, reaching out to, and scheduling interviews with prospective customers, emphasizing the importance of “earlyvangelists” and understanding human motivations for sharing.
Understanding Why People Will Talk to You
It’s natural to be skeptical that busy people will volunteer their time to discuss a product that doesn’t exist. However, Alvarez explains that human psychology provides powerful motivations for participation:
- We Like to Help Others: People derive happiness from investing their resources to help others, especially for causes aligned with their self-identity. A personal, direct request for help is highly effective.
- We Like to Sound Smart: Interviewees appreciate being recognized as experts in their daily lives or specific fields. You are giving them an opportunity to gain respect and share valuable knowledge.
- Fixing Things Gives Us a Sense of Purpose: Complaining about frustrations and seeing that their input might lead to a solution provides a sense of empowerment and catharsis. The DMV analogy illustrates how universal frustrations naturally elicit shared experiences and improvement suggestions.
By offering a positive opportunity to be helpful, sound smart, and make things better, you appeal to deep-seated human desires, making a 20-minute conversation a win-win for both parties.
Effective Strategies for Finding Customers
Finding prospective customers involves a mix of personal connections and broader outreach.
Asking Your Connections for Introductions:
- Start with your immediate network (friends, coworkers) who likely have connections in your target market.
- Craft a brief, personalized email explaining the problem you’re working on and why their connections are relevant.
- Provide a pre-written, ready-to-forward message for your contact to send, minimizing their effort and ensuring clarity on time commitment and privacy. This maintains their role as intermediary, vouching for you.
Casting a Wider Net:
- LinkedIn: Ideal for finding professionals by industry or job title. Use first- and second-degree connections for direct messaging. Consider LinkedIn InMail for third-degree connections; because it is a paid feature, messages are less likely to be dismissed as spam.
- Quora: A smaller but passionate user base, often tech-savvy. Engage with the community first by contributing value before sending private, personalized messages to users who have asked or answered relevant questions. Never solicit participants directly in threads.
- Forums and Private Online Communities: Excellent for niche markets (e.g., parents, specific hobbyists). Lurk and contribute to understand the culture before reaching out privately. Reading conversations can also be a lightweight customer development tool by revealing pain points and motivations.
- Offline World: Physically go where your prospective customers congregate (e.g., end of a marathon, coffee shop near a conference). Be mindful of their time and setting; focus on a single question or ask for contact info for a follow-up. Jon Sebastiani of KRAVE jerky found success by setting up a booth at a wine auction, directly engaging high-end culinary thinkers to validate demand for gourmet jerky.
- Blog Posts: Search for topic-specific blogs and recent articles using Google’s “Past year” filter to find authors and commenters who are passionate about your problem area.
- Twitter: Best for finding relevant hashtags and articles shared by your target audience. Direct outreach is harder due to character limits and public nature; more useful for companies with an existing following to link to surveys.
- Avoiding Craigslist for Direct Recruitment: Too many scams and not where ideal target customers seek interview opportunities. Offers of payment attract the desperate, not the engaged.
To Pay or Not to Pay for Interviews
Alvarez recommends not paying interviewees. Compensation comes from:
- Your Time and Attention: Solving a problem that causes them pain.
- Future Product Potential: Giving them an opportunity to shape a solution.
- Personal Value: Allowing them to be helpful, sound smart, and vent frustrations.
While small tokens of appreciation (like coffee or a charitable donation) are fine, monetary incentives can bias results and attract less committed participants. A customer unwilling to talk for 20 minutes without pay is unlikely to become a paying customer later.
How to Conduct Interviews: Methods and Scheduling
The best method is the most convenient for both parties, ensuring higher response rates and smoother coordination.
- Visiting the Customer’s Home or Office (“Follow Me Home”): Highest fidelity, allowing observation of the natural environment, technology, distractions, and interactions. Best for existing products/customers or problems where physical context is key. Hardest to coordinate due to privacy and permissions.
- In-Person Conversations in a Neutral Location: High fidelity, allows observation of body language. Easier to schedule than home visits, but still requires travel and a comfortable meeting spot. Good for two-person interviews or general consumer products.
- Phone Conversations: Alvarez’s most common method. Lower fidelity (no visuals) but higher response rates and easier scheduling. Allows for comprehensive note-taking and can elicit more honest responses due to lack of eye contact. Best for busy people or different time zones.
- Video Chat or Call with Screen Sharing: Combines phone benefits with visuals (facial expressions, screen sharing). Ideal for tech-savvy audiences or when seeing computer usage is critical.
- Instant Messaging (IM): Lowest fidelity (text-only), highest risk of misinterpretation or self-censorship. Best for shy individuals, those with thick accents, or when exchanging data (URLs, code snippets) is important.
Following Up and Scheduling:
- Keep initial messages brief, focusing on getting an affirmative response before discussing scheduling.
- For scheduling, offer 3-4 specific options across different days and times, clearly stating time zones. Tools like Rapportive can help determine the interviewee’s location.
- For face-to-face meetings, suggest convenient, quiet public places like Starbucks and provide Yelp links for addresses/maps.
- Send calendar invitations and reminder emails.
- Spacing Interviews: Initially, schedule no more than one 20–30 minute interview per hour. This allows for late starts, extending engaging conversations (up to 45 minutes), and immediate post-interview note summarization. This practice improves interview quality and reduces fatigue.
Interview Troubleshooting
- No Response: Wait a few days, then send one polite follow-up. If still no response, move on. If many requests yield no responses, revisit and revise your problem hypothesis or target audience—it’s an early invalidation.
- Interview No-Shows: Expect 5-10% no-shows. Politely offer to reschedule once.
- Annoyed Interviewees: Confirm “Is this still a good time?” at the start. If they seem annoyed, it often indicates a mismatch in expectations (e.g., expecting a shorter call, existing product, or payment). Clarify expectations upfront.
- Insulted Interviewees: If probing questions seem to offend, use softening language like “May I ask…” or “Let me clarify…”. Phrases like “We’ve experienced…” or “Other people have told me that…” can build empathy and reassure them.
Chapter 4: What Should I Be Learning? – Uncovering True Needs and Motivations
This chapter shifts focus from who to talk to, to what to learn, and how to ask questions effectively. It debunks the myth that customers know what they want and instead teaches how to uncover their true needs, behaviors, and constraints, leveraging social psychology to guide conversations.
Starting with Core Customer Development Questions
Alvarez uses a highly consistent, basic interview script for almost every project, regardless of audience or product. The goal is to get customers talking freely and deeply.
Basic Customer Development Questions:
- “Tell me about how you do _______ today…”: This is an open-ended invitation to describe current behaviors.
- “Do you use any [tools/products/apps/tricks] to help you get ________ done?”: Elicits specific current solutions and workarounds.
- “If you could wave a magic wand and be able to do anything that you can’t do today, what would it be? Don’t worry about whether it’s possible, just anything.”: Encourages thinking beyond perceived limitations and identifies core desires.
- “Last time you did ___________, what were you doing right before you got started? Once you finished, what did you do afterward?”: Uncovers context, preparatory steps, and follow-up actions, revealing adjacent problems and workflows.
- “Is there anything else about _________ that I should have asked?”: Allows the interviewee to share unprompted, highly relevant information.
These are the scripted questions. The bulk of the interview comes from open-ended follow-up questions like “Can you tell me more about how that process goes?” or “Why did you come to that conclusion?”. The aim is to keep the customer talking and explore their answers in depth.
Why Customers Don’t Know What They Want
Despite the common saying “It’s not the customer’s job to know what they want,” Alvarez asserts that customers can’t hide what they need. The challenge is extracting that information. Customers are unreliable at articulating their wants because:
- They are unaware of taken-for-granted constraints: Once used to limitations, they stop noticing or questioning them.
- They forget past failures: Brains are biased toward recent events, so old abandoned solutions aren’t spontaneously mentioned.
- They don’t volunteer limitations/capabilities: People often don’t admit what they are bad at.
- Proficiency doesn’t mean understanding: Most users view technology as “magic” and don’t understand how it works, leading to incremental suggestions rather than revolutionary ones.
Your role is to guide the conversation to push beyond surface-level answers and uncover unarticulated needs.
Listening for Key Insights
To maximize learning, focus on four critical factors during interviews:
- What’s the Customer Already Doing? (Current Behavior is Your Competition):
- This reveals capabilities, comfort levels, and existing decisions. Your product’s true competition isn’t just other products, but the customer’s current routine.
- Abstract up one level: Instead of asking about online grocery delivery, ask about how they feed their family. This avoids prematurely constraining solutions and reveals broader opportunities, as seen in TiVo’s shift from VCR programming to digital video recording.
- Focus on procedure, not outcomes: Ask about step-by-step actions to uncover context and nuances, like clarifying who “we” is when discussing decisions.
- Focus on the present, not the future: People are more aspirational about future behavior (“I will exercise”) than honest about current behavior (“I haven’t exercised in a week”). Frame questions around specific, recent past events to get accurate insights. Yammer’s topic usage illustrates how promised future behavior diverges from actual current behavior.
- What Constraints Are Holding Customers Back?
- People often don’t solve problems due to unrecognized constraints, not just lack of product or motivation.
- Problem not perceived as a problem: Users might have “functional fixedness” and not realize a better way exists. The Shoefitr example shows how customers adapted to ill-fitting online shoes rather than asking for a pre-purchase fit solution. Use the “generic parts technique” or implied comparisons to highlight the pain.
- Lack of awareness of what’s technologically possible: Customers suggest incremental improvements because they don’t know what advanced technology can do (e.g., pre-Siri voice recognition experiences). The “magic wand” question liberates them to imagine impossible, yet insightful, solutions.
- Limited resources (environment, time, budget): Understand which resources are scarce (e.g., one-handed use for parents, engineering time for startups).
- Cultural or social expectations that limit behaviors: Unspoken rules or self-identity can prevent adoption (e.g., “I’m not comfortable responding to managers” on Yammer, or “I’m not a careless person” preventing spending on a lost-item tracker). Look for body language or tone shifts.
- What Frustrates (or Motivates) Your Customer?
- Decisions are not purely rational. Listen for emotion (anger, enthusiasm, frustration, embarrassment) – this is prioritization.
- Understand what incentivizes them (mastery, visible progress, peer comparison) and what demotivates them (e.g., uncertainty). People who build products are often comfortable with uncertainty, but customers are not.
- How Your Customers Make Decisions, Spend Money, and Determine Value:
- The product user isn’t always the buyer (e.g., toys for kids, medicine for patients). Identify invisible stakeholders (family, managers, IT security, finance).
- Clarify “who is ‘we’?” when they discuss decisions to understand the full decision-making circle.
- Looking Objectively at Subjective Qualities: Instead of asking directly (“Are you tech-savvy?”), deduce these traits from their behaviors and word choices (“I tried an app to help me with that”). Use a traits continuum to visually map these deductions.
Chapter 5: Get Out of the Building – Mastering the Interview Process
This chapter provides a play-by-play guide for conducting comfortable and constructive customer development interviews. It addresses common anxieties and offers specific tactics to ensure that interviewers learn surprising, insightful, and unexpected things from every conversation. The ultimate goal is to empower readers to confidently engage with customers and extract valuable, actionable feedback.
Preparing for the Interview
Effective interviews start with thorough preparation.
- The Practice Interview: Conduct a dry run with someone you know (but not too closely related to your idea) to test your process and improve techniques. This builds confidence and helps refine your approach before real customer interactions. For existing products, it can help fine-tune discussions about hypothetical ideas.
- To Record or Not to Record?: Weigh the pros (capturing every word, non-biased notes, focus on body language) against the cons (doubles time needed for review, potential for interviewee self-consciousness, awkward start to conversation). Manual note-taking can encourage interviewees to talk more by keeping you from interrupting. Video recording is generally discouraged due to its intrusiveness and technical demands.
- Taking Great Notes: Forget traditional summarization. Capture as much high-fidelity detail as possible, including emotions and verbatim quotes, especially for comments that validate or invalidate your hypothesis, surprise you, or are full of emotion. Use bolding or highlighting in notes and timestamps if recording audio. Emotion is prioritization – it flags what’s truly important to the customer. A flexible interview template with sections for key questions and “do/don’t” reminders is highly recommended.
- Inviting a Note-taker (Pair Interviewing): This is a highly effective practice (used at Yammer and KISSmetrics). One person interviews while the other focuses solely on notes, leading to more comprehensive insights. It also improves interview skills through immediate feedback and involves more team members (engineers, designers) in direct customer exposure, overcoming internal resistance to customer development. Use online shared documents for real-time, searchable notes.
- Immediately Before the Interview: Familiarize yourself with the interviewee’s background (job title, industry, lifestyle) to tailor references and build rapport. Remove all distractions (email, IM, browser tabs), ensure your phone is charged and silenced, and have all materials ready.
Conducting the Interview: First Minutes and Flow
The first minutes of an interview are crucial for setting the tone.
- The First Minute: Your job is to make the interviewee feel confident they will be helpful, explicitly state you want them to do the talking, and then get them talking. A rehearsed, confident opening script that is conversational, human (“I” vs. “we”), and emphasizes the personal is vital. This establishes rapport and helps the interviewee feel like the expert.
- The Next Minute (The “60-Second Rule”): After asking your first “Tell me about how you…” question, remain silent for a full 60 seconds. This is critical for signaling that you genuinely want the interviewee to keep talking, encouraging them to move beyond superficial answers and reveal truly useful details. Avoid jumping in too quickly, as it teaches them to give short responses.
- Keeping the Conversation Flowing: After the initial silence, allow the interview to be freeform. Keep asking open-ended questions (e.g., “How long does that process take?”, “Why do you think that happens?”) to draw out more detail and explore emotional responses. The example of the KISSmetrics interview demonstrates how persistent follow-up can uncover deeper, more urgent pain points beyond initial surface-level responses.
- Avoiding Leading Questions: Be highly cautious of questions that prompt a “yes” or “no” answer (e.g., “Don’t you think…?”). These bias responses. If an interviewee starts with “yes” or “no,” assume you asked a leading question and adjust your future phrasing.
- Digging a Little Deeper: Don’t just agree. Summarize what the interviewee said in your own words and ask for corrections. This ensures clarity and often reveals omitted details or misunderstandings, pushing the conversation deeper. The Yammer file-sharing example shows how this technique revealed a more critical problem (sales team access) and higher frequency of the problem.
- Being Diplomatic with “Why?”: While the “5 Whys” technique (pioneered by Toyota) is powerful for uncovering root causes, a direct “Why?” can sound accusatory. Phrase it softly: “May I ask, why did you come to that conclusion?”
- Tangents Happen: Embrace tangents for at least a minute or two. If an interviewee veers off topic, it’s often because that topic is important to them, represents a more pressing problem, is a necessary precondition, or reveals they are not your target audience. Questions like “Do you spend more time on [tangent] or [original idea]?” can clarify its significance. KISSmetrics’ discovery of KISSinsights (now Qualaroo) from a repeated “tangent” about qualitative research illustrates the value of exploring these detours.
- Avoiding the Wish List: Customers often propose features or solutions instead of problems. Redirect them gently by asking how a requested feature would solve their problems. Emphasize that you’re trying to understand their situation to build something truly helpful, rather than just fulfilling a list of wants. The alleged Henry Ford quote (“faster horse”) highlights this principle.
- Avoiding Product Specifics: Do not show your product or prototypes until the very end of the conversation, or ideally, in a separate follow-up interview. Visuals can taint interviewee responses by subconsciously tailoring answers to what they see. If you must show something, use clearly fake mockups (e.g., Balsamiq sketches).
- Going Long: If an interview extends beyond 30 minutes, gently offer to end or request permission for a follow-up. Extended interviews lead to diminishing returns and can consume interviewee goodwill. The “foot-in-the-door technique” suggests that getting a small favor (the first interview) primes people to do larger ones (follow-up calls, beta testing).
Concluding the Interview and Immediate Post-Interview Actions
The end of the interview is crucial for relationship building.
- The Last Few Minutes: Offer some of your own time (“Is there anything I can answer for you?”). Make the interviewee feel their contribution was valuable and succeeded (“It was really helpful…”). Thank them personally and genuinely. Ask for permission to follow up or keep them in the loop, reinforcing their role as an expert. Alvarez intentionally avoids calling them “customer” during the interview to prevent a negotiating mindset.
- After the Interview: Immediately, while fresh, take five minutes to review the interview by asking yourself or your note-taker:
- How did the opening go?
- Did I ask any leading questions or offer opinions?
- Were any questions ineffective?
- What surprising insights or emotional moments occurred?
- What did I wish I had learned?
This self-assessment is key for continuous improvement of your interview technique.
- Troubleshooting Bad Interviews: If interviewees sound annoyed, confirm if it’s still a good time, or reassess if expectations were mismanaged. If you feel like you’re insulting them, use softening language and phrases that build empathy (e.g., “We’ve experienced…”). If you’re consistently getting no responses after multiple tries, it’s a sign your hypothesis might be invalid or you’re reaching the wrong people.
Chapter 6: What Does a Validated Hypothesis Look Like? – Interpreting and Acting on Insights
This chapter delves into how to interpret interview responses to reliably validate hypotheses and use these insights to drive product decisions. It emphasizes the importance of skepticism towards overly positive feedback and provides frameworks for determining when you have “enough” interviews.
Maintaining a Healthy Skepticism
Customer development interviews are subjective, making it easy to hear what you want to hear. It’s crucial to act as a “temporary pessimist,” staying skeptical, especially of “maybe” responses.
- Politeness vs. Honesty: Interviewees (consumers and business professionals) may be too polite or diplomatic to give truly honest feedback. They might say “yes” even if they wouldn’t commit money or time.
- Real vs. Aspirational: A customer saying “I would do X” or “I want Y” carries less weight than evidence of actual past or current behavior. The example of the bank executive and the $50 gratuity illustrates how quickly “security concerns” vanish when a real incentive is offered.
- Identifying Aspirational Speak: Look for passive voice, hypothetical terms, and phrases like “plan on doing,” “haven’t tried yet,” “keep meaning to,” or “I wish I had.” Compare this to customers who use active voice and describe specifics like “I’ve already tried…” or “Here’s how I do…”. Current behavior is the best predictor of future behavior. If notes are full of exclamation points but no real-world behavioral evidence, the hypothesis is not truly validated.
Organizing and Sharing Interview Notes
Effective organization and sharing ensure insights are used.
- Keeping Organized Notes: For individual interviewers, a single Word document (or Google Doc for multiple interviewers) with bolded names and a consistent template (from Chapter 5) allows for easy searching and summarization. This makes comparing responses across many interviews simpler. Evernote is an alternative for multi-platform access.
- Creating a Summary: Beyond raw notes, create a separate summary document. Each interview should be boiled down to 5-7 bullet points categorized under “Validates,” “Invalidates,” and “Also Interesting.” Prioritize based on interviewee emotion – strong emotions indicate higher significance. This keeps insights prioritized and avoids overwhelming the team with too much detail (a minimal sketch of this summary structure appears after this list).
- Rallying the Team Around New Information:
- Maximize participation: Pair interviewing (Chapter 5) brings more team members (even note-takers) directly into customer interactions.
- Sell what you’re learning: Frame insights as critical, product-saving stories rather than dry “summaries.”
- Supply context: Remember that team members lacking direct interaction need context for the insights.
- Encourage questions: Foster discussion rather than just presenting recommendations, ensuring team buy-in.
- Be where decisions are made: Share insights in existing product scope or prioritization meetings for immediate impact.
- Regular sharing: Yammer uses internal network posts for immediate notes and monthly meetings for broader summaries, using lightweight slide decks to spur conversation.
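A minimal sketch of the per-interview summary described in “Creating a Summary” above, assuming Python; the field names, categories, and sample notes are illustrative only. It groups bullets under “Validates,” “Invalidates,” and “Also Interesting,” listing emotional comments first because emotion signals prioritization:

```python
from collections import defaultdict

# Each interview boiled down to a few tagged bullets (sample data, hypothetical):
interview_summaries = [
    {"interviewee": "P1", "category": "Validates",
     "note": "Rebuilds the report by hand every Friday", "emotional": True},
    {"interviewee": "P1", "category": "Also Interesting",
     "note": "Shares results over email, not the dashboard", "emotional": False},
    {"interviewee": "P2", "category": "Invalidates",
     "note": "Happy with the current spreadsheet workflow", "emotional": False},
]

def summarize(summaries):
    """Group bullets by category, listing emotional comments first."""
    grouped = defaultdict(list)
    for item in summaries:
        grouped[item["category"]].append(item)
    for category in ("Validates", "Invalidates", "Also Interesting"):
        items = sorted(grouped.get(category, []), key=lambda i: not i["emotional"])
        print(f"\n{category} ({len(items)})")
        for item in items:
            marker = "!" if item["emotional"] else "-"
            print(f"  {marker} {item['interviewee']}: {item['note']}")

summarize(interview_summaries)
```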
How Many Interviews Do You Need?
The “right” number of interviews is not a fixed number, but depends on various factors and when you stop learning new, surprising information.
- After Two Interviews: Critically assess if you are learning what you need. Can you confidently answer: If a product solved this customer’s problem, would they buy it? How would they use it? What would it replace? Why wouldn’t they buy it? If not, refine your interview questions.
- Within Five Interviews: The First Really Excited Person: By this point, you should have encountered at least one highly enthusiastic individual whose problem aligns perfectly with your hypothesis. If not, it likely means you’re talking to the wrong people, or your problem isn’t significant enough. This is a valid (and positive) invalidation of your hypothesis, allowing you to pivot quickly.
- Within 10 Interviews: Patterns Emerge: You will start hearing repetitions in frustrations, motivations, and pain points.
- Challenging patterns: Once a concept appears three times, actively try to challenge it in future interviews using the “other people” method (e.g., “Other people say X, what do you think?”). This ensures the pattern is robust and not just a polite agreement.
- No patterns yet? If after 10+ interviews, no clear patterns emerge, your target customer audience is likely too broad. Narrow your focus (e.g., specific job title, lifestyle) to more quickly identify consistent needs.
- When You Stop Hearing Things That Surprise You: This is the best indicator that you have done enough interviews. You’ll feel confident in your understanding of common problems, motivations, frustrations, and stakeholders. Typically, this takes 15–20 interviews, equivalent to about two weeks of focused work, a small investment for the potential savings in development time.
- Factors Influencing Interview Count: The number of interviews needed depends on several factors:
- Experience with customer development and your domain: More experience means more efficient and targeted interviews.
- Complexity of your business model and number of dependencies: Two-sided markets or those with many partners require more validation. LaunchBit successfully used small numbers of participants from both sides of their two-sided market (marketers and publishers) to validate their concept.
- Investment required to create your MVP: Higher investment in the MVP means more interviews are needed upfront to de-risk.
Recognizing a Validated Hypothesis
A validated hypothesis means you are confident enough to continue investing time and effort in that direction. Alvarez illustrates this with the KISSmetrics to KISSinsights (now Qualaroo) pivot. Initial interviews for a web analytics product revealed a recurring tangent: customers’ frustrations with qualitative user research. This unprompted feedback led to a new hypothesis: “Product manager types of people have a problem doing fast/effective/frequent customer research.”
The Validation Process for KISSinsights:
- MVP Validation: A splash page describing the concept, leading to a survey where customers could sign up for a beta.
- Targeted Interviews: Conducted 20 interviews specifically on qualitative research, asking about current tools, research barriers, and “magic wand” solutions.
- Key Learnings:
- Customers were going without customer research due to complexity.
- They wanted non-public feedback and targeted interactions beyond generic surveys.
- Product managers were eager for this, while developers were reluctant to build internal tools.
- The pain was constant and severe (“whenever we make decisions on what to build”).
- Strong emotions around hating writing surveys, asking developers for help, and embarrassment about not knowing where to start.
- Outcome: This confidence led to building a quick MVP (a simple on-site survey tool). Existing customers immediately asked how to get “that survey thing for my site,” validating the problem and solution. KISSinsights achieved 10-40% response rates and significantly increased conversion for customers like OfficeDrop, eventually becoming Qualaroo.
Chapter 7: What Kind of Minimum Viable Product Should I Build? – Maximizing Learning with Minimal Investment
This chapter shifts from problem validation to solution validation, focusing on the Minimum Viable Product (MVP). It emphasizes that an MVP is primarily a learning tool designed to validate hypotheses with the smallest possible investment of time and resources.
Defining the Goal of Your MVP
The core purpose of an MVP is to maximize learning while minimizing risk and investment. It is not about building a perfect, fully-featured, or scalable product. Instead, it should specifically aim to answer your biggest assumptions and minimize your biggest risks, which often extend beyond just product functionality to distribution, pricing, and partnerships.
Key Questions an MVP Should Answer:
- “Can we get this product in front of the right customers?” (Distribution risk)
- “Are customers willing to pay for the value that this product promises?” (Value/pricing risk)
- “How does the customer measure the value she gets from the product?” (Value perception)
- “What pricing model aligns with customer value and the customer’s ability to pay?” (Business model risk)
If an MVP takes “months to build” or “can’t be explained in a couple of sentences,” it’s likely not minimal enough. The ultimate proof of validation is when customers actively pay for (or commit significant resources to) your product.
Types of MVPs for Different Validation Goals
Alvarez outlines several common MVP types, each suited for different validation needs:
- Pre-Order MVP:
- What it is: Customers commit financially (e.g., provide a credit card, sign a letter of intent, make a pledge) to a solution before it exists.
- What it learns: Gauges true commitment, not just interest. High friction in payment provides strong validation.
- Case Study: Finale Fireworks: Cofounders sold 60 pre-release copies of fireworks design software at a convention, validating demand before writing the code.
- Use Cases: Best for solutions requiring critical mass of customers or substantial development investment. Almost any product can benefit from a form of this.
- Audience Building MVP:
- What it is: Create a platform (e.g., blog, mailing list, community) to gather and engage your target customer base before building the product.
- What it learns: Validates demand for content/community around a problem, measures engagement, and establishes a built-in distribution channel.
- Case Study: Product Hunt: Founder Ryan Hoover started with a simple Linkydink mailing list to share product discoveries. Sustained sign-ups and activity validated enough interest to build a rudimentary website, demonstrating demand for the service before significant development.
- Use Cases: Well-suited for online products/services, free/social products, consulting businesses looking to scale, or audiences that prioritize time over money.
- Concierge MVP:
- What it is: Manually provide the solution to the customer’s problem, with the customer being aware that it’s a human-powered service.
- What it learns: Provides intensive, personalized feedback on core value proposition, logistics, and critical features. Validates demand and informs automation priorities.
- Case Study: StyleSeat: Founders initially acted as “Geek Squad” for beauty professionals, manually teaching them to use social media. This direct engagement built trust and revealed the core need for a comprehensive “technology partner for the beauty industry,” not just a scheduling tool.
- Use Cases: Excellent for offline or non-tech-savvy audiences, solutions with unpredictable logistics, capital-intensive scaling, or where personalized customer satisfaction is a differentiator.
- Wizard of Oz MVP:
- What it is: A product that appears fully functional and automated to the customer, but is actually powered by manual human effort behind the scenes. The customer is unaware of the manual process.
- What it learns: Validates customer behavior in a seemingly automated environment, revealing how they interact without politeness bias. Effective for sophisticated algorithms or sensitive areas.
- Case Study: Porch.com (originally HelpScore.com): Their website offered a scoring system for home contractors. In reality, a team member manually researched and generated reports. This invalidated the “score” hypothesis, showing homeowners cared more about contractor project history and neighbor recommendations, leading to a major pivot.
- Use Cases: Great for solutions requiring sophisticated algorithms or automation, sensitive problem areas (finance, health), or two-sided markets where one side can be emulated.
- Single Use Case MVP:
- What it is: A working product or feature that focuses on solving one specific problem or task extremely well, rather than a broad suite of features.
- What it learns: Validates a single hypothesis about a core function, ensuring it provides tangible value. Customer complaints are good; they indicate value and desire for more.
- Case Study: Hotwire’s Hotel Bookings: Facing an outdated site, the team built a “shadow site” for map-based search (a single use case). It launched to only 1% of traffic with minimal features but validated that maps increased engagement, despite initial complaints about missing features (a sketch of this kind of percentage rollout appears after this list). This iterative approach led to increased conversions and adoption of the new design across the entire site.
- Use Cases: Ideal for existing products validating a new direction, entering markets with complex competitors, or identifying how your product creates the most value.
- Other People’s Product MVP:
- What it is: Leveraging parts of an existing product or service (often a competitor’s) to validate your ideas. This could involve manually using a competitor’s tool or building on their API/infrastructure.
- What it learns: Provides rapid learning and validation by piggybacking on existing marketing and infrastructure, while also revealing competitive advantages.
- Case Study: Bing Offers: Facing the “chicken-and-egg” problem for a real-time deals platform, the team piggybacked on deals from competitors like LivingSocial. This allowed them to test the service with real users immediately, learning that customers disliked complex redemption processes and preferred automatic credit card-linked discounts.
- Use Cases: Effective for entering crowded markets with established competitors, solutions with difficult logistics, or teams with limited engineering resources.
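The Hotwire single-use-case example above relies on exposing the shadow site to a small, stable slice of traffic. The book does not describe the mechanics, but a common way to run such a rollout is deterministic bucketing by visitor ID, sketched here as an assumption in Python (the function and experiment names are hypothetical):

```python
import hashlib

def in_rollout(visitor_id: str, percent: float, experiment: str = "map-search") -> bool:
    """Deterministically assign a visitor to the new experience.

    Hashing experiment + visitor_id keeps each visitor in the same bucket on
    every visit, so a 1% rollout keeps showing the MVP to the same 1% of people.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000      # bucket in 0..9999
    return bucket < percent * 100              # e.g. percent=1.0 -> buckets 0..99

# Route roughly 1% of visitors to the single-use-case MVP, everyone else to the old site.
for vid in ("visitor-17", "visitor-42", "visitor-9001"):
    destination = "shadow site (map search)" if in_rollout(vid, 1.0) else "existing site"
    print(vid, "->", destination)
```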
What to Do After Building an MVP
After launching an MVP, analyze the results. It’s highly likely that some assumptions will be shattered. Use this new knowledge to reformulate more educated hypotheses. This might involve conducting new customer interviews with a different audience or devising a new MVP to learn about other aspects of your business. There’s no “magic line” of success; it’s a continuous cycle of learning and validating. Even after major decisions, ongoing customer development is crucial for sustained success.
Chapter 8: How Does Customer Development Work When You Already Have Customers? – Adapting for Established Companies
This chapter addresses the unique challenges and opportunities of implementing customer development within large companies with existing products and customers. It provides strategies to adapt MVP concepts, find the right customers, introduce new products non-disruptively, and learn from how customers truly use existing offerings, minimizing internal resistance and external risks.
Adapting the MVP Concept for Established Companies
Traditional startup MVP tactics (like TripAdvisor’s 404-error banner ads) are often too risky for established companies concerned with brand reputation and customer trust.
- “Nothing Broken” Principle: Any non-functional elements in a prototype or demo can severely damage customer perception of reliability and credibility. Even small omissions (like a missing privacy policy link on a bank demo) can lead to strong negative reactions. Invest extra hours in spellchecking, ensuring working links, and polishing images to avoid embarrassment. In the enterprise world, a “minimum viable” product must still be a “minimum exceptional product,” a phrase coined by Porch.com.
- “Attractive but Fake” Prototypes: If demos look too real, customers might delay purchases awaiting a new version that might never be built.
- Use a Sketch: Tools like Balsamiq create consistent, clear, but cartoon-like sketches that are clearly not finished products, preventing false expectations.
- Use a Different Domain: For higher-fidelity images, use an alternate domain name and branding that doesn’t resemble your company’s official design. This prevents direct association and bias.
- More Viable than Minimum: For established companies, the MVP must be “minimum viable” in terms of providing value and learning something, but not “minimum functional” to the point of frustration.
- Startups vs. Established Companies: Startups validate “will anyone care?”; established companies already know customers care, so their MVP must confirm “will they get value?”.
- Yammer’s MVP Definition: “The smallest amount of product we need to build to provide value and learn something about how a person behaves in this context.” A successful MVP allows concluding whether an idea is “worth investing more effort in.”
- User Complaints vs. Frustration: Complaints are a good sign – they show interest and a desire for improvement. Frustration means the MVP wasn’t viable enough.
- Avoiding “But What If…?”: Product managers and designers accustomed to comprehensive releases may over-engineer MVPs. Focus strictly on the most high-priority, frequently used features to validate the core value proposition. Yammer’s collaborative editing feature launched without delete or version history, prioritizing core editing functionality to validate usage before building secondary features.
- Common Objections to MVPs in Large Companies: Alvarez outlines responses to common internal resistance points:
- “Customers have higher expectations”: MVP isn’t broken; it’s a high-quality experience for fewer use cases, designed for early learning. Frame it as “hypothesis-driven development.”
- “Must support all platforms”: Build for one platform first; if successful, expand.
- “Must scale to millions of users”: Validate on a subset of users, scaling performance only after demand is confirmed.
- “No smaller subset of features will satisfy customers”: Prioritize features by usage frequency/impact; use early customer requests to guide subsequent development.
- “Can’t introduce inconsistency into design”: Small, measurable design changes prove value, making it easier to argue for broader design improvements later.
Finding the Right Customers for Established Companies
Existing customers are a unique resource, but selection is key.
- Avoid the “Wrong” Customers: Giving demos to conservative, slow-to-upgrade, or influential “big-name” customers can backfire, causing anxiety about current products and yielding unhelpful feedback. Alvarez’s experience at Yodlee demonstrated this, as showing new product ideas to conservative banks only created fear and led to damage control.
- Find the People Who Can’t Live Without Your Product: These are your most passionate and valuable customers.
- Sean Ellis’s “How disappointed would you be…?” question: Use a survey asking “How would you feel if you could no longer use our product?” and prioritize those who select “very disappointed.” This taps into loss aversion psychology (see the tallying sketch after this list).
- Use data: Yammer identifies its top 1% and 10% active users via analytics for deeper engagement.
- Collaborate with account managers and support: They know which customers are beta-tolerant, interested in specific features, and most engaged (even if complaining).
- Your Best Customers Hate Disruption: Longtime, loyal customers often resist disruptive innovations because they’ve invested heavily in current processes and workarounds. Target first-time buyers or non-users for truly disruptive products, as Kodak learned by targeting first-time photographers with its early cameras.
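Below is a hedged sketch of how the two selection tactics above might be operationalized: tallying the Sean Ellis survey to find the “very disappointed” segment, and ranking users by recent activity in the spirit of Yammer’s analytics-driven approach. Thresholds, field names, and sample data are this summary’s assumptions, not the book’s:

```python
# Sean Ellis question: "How would you feel if you could no longer use our product?"
responses = [
    {"user": "a@example.com", "answer": "very disappointed"},
    {"user": "b@example.com", "answer": "somewhat disappointed"},
    {"user": "c@example.com", "answer": "very disappointed"},
    {"user": "d@example.com", "answer": "not disappointed"},
]

very_disappointed = [r["user"] for r in responses if r["answer"] == "very disappointed"]
share = len(very_disappointed) / len(responses)
print(f"'Very disappointed': {share:.0%}; interview these users first:", very_disappointed)

# Most-active users (e.g., top 10% by recent sessions), analogous to pulling
# the most engaged users from product analytics.
sessions_last_30_days = {"a@example.com": 412, "b@example.com": 35,
                         "c@example.com": 260, "d@example.com": 3}
ranked = sorted(sessions_last_30_days, key=sessions_last_30_days.get, reverse=True)
top_10_percent = ranked[: max(1, len(ranked) // 10)]
print("Top 10% most active:", top_10_percent)
```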
Explaining and Engaging with Existing Customers
Over-communication is crucial when working with existing customers to manage expectations and ensure honest feedback.
- “You’re Asking Questions—Not Building Something”: Customers are used to vendors selling, not exploring. Clearly state that conversations are exploratory, not commitments to build features. Use language that implies a distant future for new ideas.
- Give Permission to Complain: In many business contexts, complaining is seen as rude. Explicitly state that you’re seeking honest feedback, even negative, to truly understand their problems. Alvarez’s Yammer template explicitly sets this tone.
- The Storytelling Demo: This technique, also known as a persona walkthrough, is highly effective for showing new features or concepts to existing customers without making them seem “real.”
- How it works: You narrate a story about a fictional, archetypal user (e.g., “Jessica”) interacting with a prototype. You describe what Jessica does, thinks, and how she uses features, inviting the customer to agree or disagree with Jessica’s behavior.
- Benefits: Discourages customers from fixating on features or edge cases and instead focuses them on workflow and problem-solving. It’s a conservative but effective first step for companies new to this type of customer interaction.
- Tools: Use InVision or Axure to create clickable, consistent prototypes that feel real but are clearly not working code.
Incognito Customer Development
Sometimes, your company’s brand or market position biases customer feedback.
- Taking on a New Identity: For sensitive overhauls (like Microsoft’s Hotmail to Outlook.com transition), researchers removed all Microsoft branding, presenting themselves from a generic “company improving email.” This provides untainted feedback. A different domain name is often sufficient.
- Talking to People Who Aren’t Customers (Proxies): Interview non-customers who have deep domain knowledge (e.g., administrative assistants, early-career product managers, or even your own internal customer support staff).
- Yodlee’s BillPay Accelerator: By interviewing actual online banking users (not just financial executives), Alvarez discovered a common pain point: 88% of users who “switched” banks still paid bills from old accounts due to tedious setup. This deep user insight, not accessible via executive meetings, led to a new, valuable product solving a problem for both consumers and banks.
Learning How Customers Really Use Your Product
For existing products, the goal is to understand how customers extract value and identify opportunities for improvement.
- “Show Me How You’re Using Our Product”: Instead of relying on what customers say they do, observe them directly using your software (in-person or via screen-sharing).
- Purpose: To learn actual frequency of use, what they do after using your software, how your product aligns with their real-world workflow, what features are unused, and new opportunities for value.
- Framing: Frame it as “getting someone up-to-speed” or “watching how people use our software.” Refrain from offering opinions or corrections to ensure natural behavior.
- “Here’s How to Use Our Product” (Didactic Approach): For new customers, products with steep learning curves, or to understand why features are unused, take an opinionated stance.
- Purpose: You explain how you envision someone using the product, explicitly inviting the customer to correct your assumptions.
- Benefits: Identifies restrictions (legal, cultural), clarifies your value proposition, and explains why features are unused. This method can also serve as both research and training.
- Usage Frequency vs. Replaceability: While quantitative analytics measure frequency, qualitative interviews reveal replaceability. A product used daily but easily replaced offers less value than one used weekly but integrated into a critical workflow, as seen with KISSmetrics customers using analytics for A/B tests.
- Later Is Better than Never: Sometimes, early customer input is impossible (e.g., Microsoft Kinect due to secrecy and engineering complexity). In these cases, systematically validate assumptions internally through rigorous testing and prototypes, then bring in customers for feedback on working prototypes as soon as feasible. The Kinect team used internal incubation, rapid sprints, and later, public Macy’s demos to gather diverse motion-capture data and observe real user interaction.
Chapter 9: Ongoing Customer Development – Integrating Learning into Daily Operations
This final chapter focuses on embedding customer development into the everyday routines of established companies. It emphasizes that continuous learning doesn’t require massive, planned efforts but can be achieved through small, consistent interactions and by leveraging existing customer touchpoints.
Harnessing Customer-Facing Coworkers
The most untapped resource for ongoing customer development is your internal customer-facing teams – sales, account managers, and customer support. They are “already out of the building” and have daily, direct interactions with customers.
- Turning Feature Requests into Learning Opportunities: Instead of answering a feature request with a “yes” or “no,” train sales and account managers to respond with questions. This softens the interaction, avoids losing deals, and extracts critical context about why a feature is desired and how it would solve a problem.
- Linguistic Softening: Teach phrases like “I’d like to make sure I fully understand your needs…” or “Can I ask more about the context?” to make inquiries conversational, not interrogative.
- Repairing Relationships and Demonstrating Value: Offering to join sales calls or account visits can bridge gaps between product and sales teams, demonstrating the value of direct customer listening. Palantir’s practice of sending engineers into the field exemplifies removing the middleman to gain deep, immediate customer understanding.
Leveraging Customer Support Interactions
Customer support is often seen as a cost center, but it’s a “listening post” providing privileged access to customer insights.
- The Unexpected Value of Support Calls: While staffing the support lines, Mindbody’s CEO discovered that customers struggled with basic, frequently used features. This led to Vanessa Pfafflin becoming the company’s first user researcher; she found that customers often didn’t complain about problems, or blamed themselves, which makes support logs a rich source of hidden insights.
- Transforming Complaints: Support professionals can use questions to defuse negativity. When a customer complains, asking follow-up questions (“To make sure I understand correctly: you’re saying X? If we built X, what would you be able to do that you aren’t able to today?”) makes the customer feel heard and understood, transforming anger into collaboration. Rachel Pennig of Recurly successfully uses this to clarify needs and offer better solutions.
- Don’t Take Suggestions at Face Value: Customers often suggest features or changes that are symptoms of underlying problems, not the root cause.
- Same request, different problems: 20 customers may ask for “greater administrative controls,” but each means something entirely different.
- Different requests, same problem: 20 customers may suggest different features, but they are all trying to solve the same underlying problem. The support team’s role is to push beyond the suggested solution to the actual pain point.
- Functionality or Design Issues: For interface complaints, use the “4 As” (Apologize, Admit, Ask, Appreciate). This sets a positive tone for asking clarifying questions (e.g., “Do you use this feature around the same time each month? Are you the only person who uses it…?”). This reveals the context and impact of the issue, often identifying 10 silent sufferers for every one vocal complainer.
- Bugs and Errors: After resolving a bug, take a moment to ask: “Can I ask what task you’re trying to accomplish when you use this functionality? I’m asking because I want to make sure I’ve given you the most useful answer possible.” This provides valuable insight into the intent behind the action that led to the bug.
Implementing a “Question of the Week”
For lightweight, ongoing learning, introduce a “Question of the Week” to customer interactions.
- What it is: A single, standard question added to the end of customer support emails or live chat interactions.
- Benefits: Gathers factual, numerical, or who/what/how/when/why data (e.g., “How many hours this past week did you spend in meetings?”). The answers are easy to tally (see the sketch after this list) and reveal useful background information and trends.
- Recognizing Bias: Be aware that customers initiating contact (power users, admins, unhappy users) are not a representative sample. Understand the direction of bias (e.g., power users clamoring for niche features) and correct for it when making decisions.
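As a hedged illustration only (not from the book), the sketch below shows one way to tally “Question of the Week” answers exported from a support tool as a CSV; the file name and the “answer” column header are assumptions for the example.

```python
# Minimal sketch (not from the book): tally answers to a "Question of the Week"
# exported from a support tool as a CSV. The file name and the "answer" column
# header are assumptions for illustration only.
import csv
from collections import Counter


def tally_answers(path: str) -> Counter:
    """Count how often each distinct answer appears in the export."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = (row.get("answer") or "").strip().lower()
            if answer:
                counts[answer] += 1
    return counts


if __name__ == "__main__":
    # Print answers from most to least common.
    for answer, count in tally_answers("question_of_the_week.csv").most_common():
        print(f"{count:4d}  {answer}")
```

A spreadsheet pivot table works just as well; the point is simply that a single standard question yields data you can count week over week.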
Closing the Loop
Ensuring that customer insights are captured, summarized, and shared is vital for organizational learning.
- Collecting Information: Use lightweight methods that fit existing tools (shared Google Docs/Evernote, dedicated email addresses, simple Google Forms). The easier it is for customer-facing teams to submit notes, the more feedback you’ll get.
- Sharing the Impact: Explicitly share success stories showing how customer development insights led to positive outcomes, saving time/money or increasing satisfaction. Use a “We learned X, so we didn’t build Y, which saved Z time!” or “We learned X, so we tried Y, which resulted in Z positive metrics change!” format.
- Visual Reinforcement: Print summaries as posters or display them on office TVs to make customer learning a visible part of the company culture.
- Regular Updates: Share updates weekly (for startups) or monthly/quarterly (for larger organizations) to keep teams engaged. Yammer shares anonymized summaries with its private customer community, fostering transparency and promoting further questions and challenges. This transforms talking about customer development into another form of practicing customer development.
Key Takeaways: What You Need to Remember
Core Insights from Lean Customer Development
- Every hour spent on customer development saves 5-10+ hours in coding, design, and wasted effort.
- The goal is to invalidate your assumptions about what customers want so that you build what they will actually buy.
- Customer development works for companies of all sizes, not just startups, and runs in parallel with product development.
- Combat cognitive bias by rigorously writing down and challenging your assumptions; you naturally seek to confirm what you believe.
- Focus on the customer’s problem, not their suggested solution.
- The best predictor of future behavior is current behavior; listen for real-world evidence, not just aspirational statements.
- Emotion in an interview signifies prioritization and reveals what truly matters to the customer.
- Embrace tangents in conversations; recurring tangents can reveal new, unexpected opportunities.
- Don’t pay for interviews; payment biases results and attracts less committed participants.
- Skepticism is key; if a customer says “maybe,” write it down as “no.”
- Customer support is a critical “listening post” for ongoing customer development.
- An MVP should maximize learning while minimizing risk and investment, validating core hypotheses before scaling.
Immediate Actions to Take Today
- Identify your core assumptions as a team and write them down.
- Formulate your problem hypothesis in the “I believe [type of people] experience [type of problem] when doing [type of task]” format. Make it as narrow and specific as possible.
- Map your target customer profile using a traits continuum.
- Conduct a practice interview with a colleague to refine your technique.
- Schedule your first few customer interviews, starting with people whose problem is most severe.
- Use the basic interview script (e.g., “Tell me about the last time you…”, “Magic wand question”) and practice the “60-second silence” rule after your first question.
- Take high-fidelity notes, focusing on validation/invalidation, surprises, and emotions.
- Summarize each interview immediately into 5-7 bullet points under “Validates,” “Invalidates,” and “Also Interesting.”
- Share your learnings regularly with your team, focusing on impact and encouraging questions.
Questions for Personal Application
- For your next product idea, what assumptions are you making that you haven’t explicitly written down or tested?
- Who are the “earlyvangelists” for your current problem, and where can you find them?
- When interviewing, are you truly listening for what customers do or just what they say they want?
- What “magic wand” questions could you ask to uncover deeper, unarticulated pain points?
- How can you leverage your customer support team to provide daily, ongoing customer insights?
- What is the absolute “minimum viable” experience you could create to validate your riskiest assumption, and how quickly could you build it?
- Are you clearly communicating to customers that your conversations are exploratory, not commitments?
- How will you ensure that customer development insights translate into tangible product decisions and are celebrated within your organization?