Validating Product Ideas: A Comprehensive Summary
Introduction
“Validating Product Ideas Through Lean User Research” by Tomer Sharon is a practical guide for anyone involved in bringing new products or services to life, particularly product managers, startup founders, designers, and researchers. Rooted in the author’s interviews with 200 industry professionals, the book addresses their most pressing questions about users and offers actionable, “lean” user research techniques to answer them quickly and effectively. Sharon argues that too many product ideas fail because they are based on assumptions rather than genuine user needs, advocating for a data-driven approach from conception through launch. This summary will break down the book’s core ideas, chapter by chapter, providing a thorough overview of the lean user research methods presented and their practical application.
What Do People Need?
This chapter emphasizes the critical importance of understanding genuine user needs before investing time and resources into product development. It highlights that many products fail not because they are poorly built, but because they don’t solve a problem people actually care about. The chapter introduces experience sampling as a lean method to uncover these needs.
Answering the Question with Experience Sampling
Experience sampling involves repeatedly prompting research participants to document their experiences in real-time over a set period. This technique captures behavior and context as it happens, providing a rich source of data on needs, frustrations, and delights.
- Defining Scope and Question: The study begins by clearly defining the area of inquiry (e.g., grocery shopping, photography) and phrasing a specific, repeatable question about recent behavior.
- Finding Participants: Recruiting 25-200 participants is generally manageable, balancing enough data with analysis capacity. A screener questionnaire is crucial to ensure participants meet the desired criteria.
- Data Point Determination: The number of data points needed dictates the number of participants, study length, and frequency of questioning. Aiming for 500-1,000 useful answers is a good target.
- Choosing Medium: Select a simple, accessible method for sending questions and collecting responses, such as SMS, email, or a dedicated app, ensuring data is easily collected in one place.
- Planning Analysis: Predefine categories for classifying responses to simplify the process. Analysis involves disassembling qualitative answers into quantifiable components.
- Setting Expectations: Clearly communicate the study’s duration, frequency of questioning, and required effort to participants, emphasizing the value of their contributions and offering incentives.
- Launching and Monitoring: Conduct a pilot test first to refine the study before launching to the full participant group. Continuously monitor responses and provide support as needed.
- Analyzing Data: Classify responses into predefined categories, identifying themes and patterns. Eyeballing the data provides a quick feel for emerging insights.
- Generating Bar Charts: Visualize classified data to show frequency counts of categories and specific issues, providing a numerical story of the findings.
- Identifying Themes: Synthesize the data into themes, which should include a title, description, and potential design implications, answering the core research question about user needs.
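The classification and bar-chart steps above can be sketched as a small script. This is a minimal illustration, not code from the book; the categories, keywords, and sample responses are hypothetical stand-ins for a real coding scheme.

```python
from collections import Counter

# Hypothetical predefined categories, keyed by a keyword found in answers.
# In a real study you would classify responses by hand against your scheme.
CATEGORIES = {
    "price": "Cost concerns",
    "time": "Time pressure",
    "choice": "Too many options",
}

def classify(answer: str) -> str:
    """Assign a response to the first matching predefined category."""
    text = answer.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    return "Other"

# A handful of made-up experience-sampling responses.
responses = [
    "Spent too much time comparing prices",
    "Couldn't decide, too much choice in the aisle",
    "Checkout was quick today",
]

counts = Counter(classify(r) for r in responses)

# Print a crude text bar chart of category frequencies.
for category, n in counts.most_common():
    print(f"{category:<18} {'#' * n} ({n})")
```

Replacing the keyword lookup with manual tags per response keeps the same counting and charting logic while matching how the book suggests classifying answers.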
Experience sampling, while simple, offers a powerful way to move beyond assumptions and uncover genuine user needs by observing behavior as it unfolds in real-time.
Who Are the Users?
This chapter addresses the fundamental question of identifying and understanding your target audience. It emphasizes that while demographic data can be useful for marketing, understanding user behaviors and motivations is crucial for effective product development. The chapter introduces interviewing and persona development as complementary methods for achieving this understanding.
Answering the Question with Interviewing and Personas
Interviewing, whether in-person or remote, provides a direct dialogue to explore users’ feelings, desires, struggles, and opinions. Personas, when grounded in research, serve as valuable communication tools to share these insights across the team and foster empathy.
- Creating BS Personas: Start by brainstorming assumptions about your target audience with your team, creating “bullshit” (BS) personas based on these educated guesses. This exercise helps focus your research efforts.
- Deciding Who and How to Interview: Consider different types of interviewees (user, limiting user, extreme user, expert) and interview formats (in-person in a quiet room, street intercept, remote). In-person, in-context interviews are highly recommended.
- Writing a One-Page Plan: Document the study’s background, goals, research questions, methodology, participants, and schedule to ensure team alignment and a shared understanding.
- Finding 10 Interviewees: Recruit participants who fit your criteria using a screener questionnaire. Aim for a small, digestible number of interviewees (around 10) for qualitative depth.
- Preparing the Interview: Craft an interview guide that encourages participants to tell stories about recent experiences. Use a variety of question types (sequence, guided tour, specific example, etc.) and prepare follow-up questions to probe deeper.
- Preparing for Data Collection: Plan how to capture interview data, such as through note-taking (ideally by a second team member) and recording. The KJ Technique is recommended for analyzing the qualitative data.
- Establishing Rapport: Build trust and make participants feel comfortable by smiling, making eye contact, listening attentively, and expressing gratitude. Avoid verbal and nonverbal contradictions.
- Obtaining Consent: Ensure participants understand their rights, including confidentiality, voluntary participation, and the ability to withdraw at any time. Informed consent is crucial for ethical research.
- Conducting the Interviews: Focus on listening, observing nonverbal cues, and exploring cultural differences. Avoid pitching your product or asking leading questions.
- Analyzing Collected Data: Use the KJ Technique to collaboratively group and prioritize observations from the interviews, reaching a shared understanding of the findings.
- Transforming BS Personas to Personas: Based on the analyzed interview data, refine your initial BS personas into research-based personas that accurately reflect the behaviors, motivations, and pain points of your actual or potential users.
By combining interviewing with persona development, teams can gain a deep, empathetic understanding of their users, which is essential for creating products that truly resonate.
How Do People Currently Solve a Problem?
This chapter shifts the focus from identifying general needs to understanding how people currently address a specific problem that you might want to solve. It highlights the importance of falling in love with the problem, not just a potential solution, and emphasizes that observing real-world behavior is crucial. The chapter introduces observation as a powerful method for gaining this understanding.
Answering the Question with Observation
Observation involves watching people in their natural environment as they go about their activities. This method provides rich contextual data that can reveal workarounds, habits, and unmet needs that users might not articulate in interviews.
- Finding Eight Research Participants: Recruit a small, manageable number of participants (around eight) who are willing to be observed in their natural environment. Utilize screeners and consider the logistical challenges of location.
- Preparing a Field Guide: Create a guide with research questions and a list of behaviors and occurrences to look for during the observation session. The level of structure in the guide depends on your experience and needs.
- Briefing Observers: If you have a team of observers, brief them on the participant, the session’s flow, and potential challenging situations they might encounter, ensuring everyone is prepared and knows their role.
- Practice!: Conduct a short practice observation session with a colleague to get comfortable with the process, including note-taking, photo/video recording, and being present without being intrusive.
- Gathering Equipment: Collect necessary equipment such as cameras, audio recorders, batteries, chargers, notebooks, pens, and Post-it notes. Prioritize small, non-intimidating devices.
- Establishing Rapport: Build trust with the participant upon arrival by smiling, making eye contact, expressing gratitude, and respecting their space. Dress appropriately for the environment.
- Obtaining Consent: Clearly explain the purpose of the observation, what will happen, how data will be used, and the participant’s rights. Ensure they understand and agree before starting.
- Collecting Data and Paying Attention: Observe and document routines, interactions, interruptions, shortcuts, contexts, habits, rituals, jargon, annoyances, delights, transitions, and artifacts. Maintain an open mind and have conversations, not just interviews.
- Debrief: Conduct quick debriefs immediately after each observation session to capture fresh insights. Daily debriefs with the team using affinity diagramming help organize observations.
- Analyze and Synthesize: Use affinity diagramming or the KJ Technique to group observations and identify key themes. Craft short stories based on these themes to describe potential future scenarios or product ideas.
By immersing yourself in users’ environments through observation, you can gain invaluable insights into the problems they face and how they currently solve them, leading to more impactful solutions.
What Is the User’s Workflow?
This chapter delves into understanding the sequence of steps people take to achieve a specific goal. It highlights the importance of designing products that align with existing user workflows to minimize friction and maximize usability. The chapter introduces the diary study as a method to uncover these workflows, particularly for complex or extended processes.
Answering the Question with a Diary Study
A diary study involves participants documenting their activities, thoughts, and experiences over a period of time. This provides a detailed, real-time record of their workflow, habits, and motivations, which can be difficult to capture through other methods.
- Choose Diary Type and Structure: Decide between a structured diary (event-triggered, time-interval, format-specific, or a combination) or an unstructured diary, depending on the study’s goals and your existing knowledge of the workflow.
- Set up a Data Collection Tool: Select an accessible tool for participants to submit diary entries, such as email, SMS, instant messaging apps, or dedicated diary study platforms. Prioritize ease of use for the participant.
- Carefully Recruit Eight Research Participants: Recruit participants who are expressive and have the self-discipline to consistently document their activities. Use screeners to identify suitable candidates and confirm their availability for the study’s duration and a concluding interview.
- Prepare Instructions and Brief Participants: Provide clear written instructions outlining the study’s goals, dates, incentive, tool usage, and specific diary assignments (if structured). Conduct a briefing to answer questions and set expectations.
- Launch the Pilot and Then the Full Study: Run a pilot test with a small group to ensure instructions are clear, the tool works, and participants understand what is expected. Refine the study based on the pilot before launching to the full participant group.
- Prompt Participants for the Right Data: Monitor diary entries as they are submitted and proactively ask for clarifications or more detail if needed. Encourage the inclusion of photos or videos to enrich the data.
- End with Interviews: Conduct interviews with participants after the diary period to fill in gaps, ask follow-up questions, clarify entries, and gain deeper insights into their motivations and the context of their actions.
- Reframe Diary Data: Systematically analyze the collected diary and interview data by tagging entries to identify commonalities, themes, and relationships related to the user workflow. Tools like Reframer can assist with this process.
- Construct Workflow: Based on the analyzed data, create a numbered list of steps outlining the user’s workflow. Include a name, description, relevant quotes, and supporting photos for each step.
By using a diary study, you can gain a detailed understanding of how users perform complex tasks over time, enabling you to design products that seamlessly integrate into their existing workflows.
Do People Want the Product?
This chapter explores the question of market desirability – whether people, upon learning about your product, will actually want to use or purchase it. It emphasizes that this question is more about marketing and communication than just product design, and introduces lean MVP (Minimum Viable Product) experiments to gather behavioral data on user interest.
Answering the Question with a Concierge MVP and Fake Doors Experiment
These MVP techniques allow you to test user desire without fully building the product. A Concierge MVP simulates the product’s functionality manually, while a Fake Doors experiment gauges interest by offering a non-existent feature or product.
- Choose an Experiment Type: Select a Concierge MVP for exploration and learning when you have no product yet, or a Fake Doors experiment to gauge interest in a specific idea when you have some existing presence.
- Design a Concierge MVP: Identify the core value of your idea and simulate its delivery manually using existing technology (e.g., email, SMS). Ensure the manual service’s quality aligns with the envisioned product to avoid misleading results.
- Find Customers and Pitch Concierge MVP: Identify where your target audience physically or virtually congregates and pitch your service to them. Recruit a manageable number of interested customers and set clear expectations.
- Serve the Concierge MVP to Customers: Deliver the manual service, minimizing unnecessary interaction. Track key events, proactively seek feedback, and eventually ask for payment to gauge perceived value. Iterate the service based on feedback and track progress.
- Design a Fake Doors Experiment: Implement a call to action for a non-existent feature or product on a website, app, or landing page. This could be a button, a link, or even an ad campaign leading to a “coming soon” page. Be honest with users who click: apologize and explain that the feature is not yet available.
- Determine a Fake Doors Threshold: Before launching, define the specific metric (e.g., click-through rate, conversion rate) and the threshold you need to reach to validate sufficient user interest and proceed with development.
- Make a Decision and Move On: Evaluate the results of your experiment. If your predefined threshold is met or exceeded, it indicates sufficient user interest. If not, recognize that your initial assumption was likely invalidated and consider pivoting your idea or target audience based on what you’ve learned.
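The threshold decision for a Fake Doors experiment reduces to simple arithmetic. A minimal sketch, with hypothetical traffic numbers and a hypothetical 5% click-through threshold (the book leaves the specific metric and threshold up to you):

```python
def fake_doors_decision(clicks: int, impressions: int, threshold: float) -> str:
    """Compare the observed click-through rate against a predefined threshold."""
    if impressions == 0:
        raise ValueError("no traffic recorded")
    ctr = clicks / impressions
    # Meeting the threshold suggests enough interest to build the feature;
    # falling short suggests the assumption was invalidated.
    return "build" if ctr >= threshold else "pivot"

# Example: 180 clicks on 3,000 impressions against a 5% threshold.
print(fake_doors_decision(clicks=180, impressions=3000, threshold=0.05))  # → build
```

The key discipline is defining the threshold before launch, so the result forces a decision rather than a rationalization.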
By running Concierge MVP or Fake Doors experiments, you can gather valuable behavioral data on whether people truly want your product, reducing the risk of building something nobody needs.
Can People Use the Product?
This chapter focuses on evaluating the usability of your product – whether users can effectively and efficiently accomplish their goals with it. It stresses that usability testing is not a luxury but a critical practice for improving design and preventing user frustration. The chapter introduces online usability testing as an accessible method for assessing product usability.
Answering the Question with Online Usability Testing
Online usability testing allows you to observe real users interacting with your website, web application, or mobile app remotely. This method provides valuable feedback on design strengths and weaknesses without the need for in-person moderation.
- Write a One-Page Plan: Document the study’s background, goals, research questions, methodology, participants, and schedule to ensure clarity and alignment within your team.
- Find 5 or 500 Participants: Recruit a small number of participants (around 5) for qualitative insights into why issues occur, or a larger number (around 500) for quantitative data on what is happening. Use screeners to ensure participants meet your criteria.
- Phrase Instructions, Tasks, and Questions: Craft clear instructions for participants, realistic task scenarios to evaluate specific functionalities, and relevant questions to gather their perceptions and attitudes. Consider using standard usability scales like SUS.
- Pilot-Test!: Always conduct a pilot test with at least one participant to identify and fix any ambiguities in instructions, tasks, or questions, or technical issues with the testing platform.
- Prepare a Rainbow Analysis Spreadsheet: Set up a collaborative spreadsheet to organize and analyze the qualitative data from participant videos. Include sheets for observations, metrics, participant information, raw data, and a summary.
- Launch the Test: Deploy the study to your participants through the chosen online usability testing platform. Monitor the progress of the test and the quality of the data being collected.
- Collaboratively Analyze Results: Watch participant videos as a team, documenting observations in the rainbow spreadsheet. Facilitate discussions to identify patterns, understand the “why” behind user behavior, and reach a shared understanding of the findings.
- Make Changes: Prioritize changes based on their potential impact, the frequency of the issue, and how easily users can learn to overcome it. Don’t be afraid to make significant changes based on the research findings and re-evaluate afterward.
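The chapter mentions the System Usability Scale (SUS) as a standard questionnaire. Its conventional scoring can be computed as follows; this is a generic sketch of the standard formula, not code from the book, and the ratings are hypothetical.

```python
def sus_score(ratings: list[int]) -> float:
    """Compute the System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribute rating - 1);
    even-numbered items are negatively worded (contribute 5 - rating).
    The summed contributions are multiplied by 2.5 for a 0-100 score.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd item
        for i, r in enumerate(ratings)
    )
    return total * 2.5

# A hypothetical participant's ratings for items 1-10.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # → 85.0
```

Averaging scores across participants gives a single comparable usability number for each study round.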
Online usability testing provides a fast and effective way to gather feedback on your product’s usability from real users, enabling you to make informed design decisions and improve the user experience.
Which Design Generates Better Results?
This chapter addresses the question of how to compare different design options and determine which one performs best in achieving specific goals. It highlights that relying solely on intuition or comparing feature lists is insufficient and introduces A/B testing as a robust method for data-driven decision-making.
Answering the Question with A/B Testing
A/B testing involves comparing two or more variations of a page, feature, or product by showing each to a different segment of your user traffic and measuring key metrics, such as conversion rates. This allows you to identify which design is most effective in practice.
- Decide What to Compare: Identify the specific element, page, task, or feature you want to test. Prioritize testing high-risk unknowns where a data-driven decision is most valuable.
- Compare Pages, Tasks, Features, or Elements: Create two or more variations of the item being tested. Ensure the variations are sufficiently different to increase the likelihood of a clear winner. Multivariate testing can be used for testing combinations of elements on a single page.
- Find Research Participants: A/B testing does not require active participant recruitment; it utilizes your existing website or app traffic.
- Evaluate If It’s a Good Time to Test: Run the A/B test when all variations are on equal footing (e.g., fully functional and released). Avoid running tests at times affected by seasonality that could skew results.
- Determine What Would Be an Actionable Result: Define the specific metric you will measure and the threshold you need to reach to consider a variation a “winner” and justify implementing the change. Ensure the results will lead to concrete actions.
- Choose the Tool, Configure the Test, and Launch It: Select an A/B testing tool that fits your needs and platform. Configure the test by setting the metric, traffic percentage, confidence level, and duration or quota.
- Stop the Test: Avoid stopping the test prematurely based on initial results. Run the test for a sufficient duration (at least 7 days, preferably 30) to account for daily and weekly variations in user behavior.
- Understand the Results: Analyze the data provided by your A/B testing tool. Look for statistically significant differences in the measured metric between the variations. Be aware of the concept of confidence intervals and statistical ties.
- Understand “Why,” Not Just “What”: A/B testing tells you what happened, but not why. Supplement A/B testing with qualitative methods, such as observing users interacting with the different variations, to gain a deeper understanding of the reasons behind the results.
- Make a Decision: Based on the statistically significant results, make a decision to implement the winning design or continue experimenting if there is a statistical tie. Track subsequent metrics to confirm the impact of the change.
- Decide What to Test Next: Use the insights gained from the A/B test to inform your next experiment. Continuously iterate and improve your design based on data.
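Most A/B testing tools report significance for you, but the underlying check can be illustrated with a standard two-proportion z-test. This is a sketch under that assumption, using only the standard library; the conversion counts are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test on conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical split: variation B converts 120/2,000 vs. A's 90/2,000.
p = ab_test_p_value(conv_a=90, n_a=2000, conv_b=120, n_b=2000)
print(f"p = {p:.3f}")  # significant at the common 0.05 level if p < 0.05
```

A small p-value only says the difference is unlikely to be chance; as the chapter stresses, it still doesn’t tell you why one variation won.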
A/B testing is a powerful tool for optimizing existing designs and making data-driven decisions about which variations perform best, leading to continuous product improvement.
How Do People Find Stuff?
This chapter focuses on the crucial aspect of findability within your digital products – the ease with which users can locate the information or functionality they need to complete their tasks. It emphasizes that poor findability can lead to frustration and task failure, making it a critical area for evaluation and improvement. The chapter introduces tree testing, first-click testing, and the lostness metric as methods for assessing findability.
Answering the Question with Tree Testing, First-Click Testing, and Lostness Metric
These research techniques allow you to evaluate different aspects of your product’s information architecture and navigation to understand how users attempt to find what they’re looking for and how successful they are.
- Write a One-Page Plan: Document the study’s background, goals, research questions, methodology, participants, and schedule to ensure a clear focus for your findability evaluation.
- Find 500 Research Participants: Recruit a relatively large number of participants (around 500) for these quantitative findability studies to ensure statistically significant results. Utilize screeners to recruit participants who match your target audience.
- State Product Navigation Assumptions: Define the information structure (the “tree”) or screen designs you will be testing. This could be your current navigation, a new proposed structure, or variations of screen layouts.
- Phrase Instructions, Tasks, and Questions: Craft clear instructions for participants, realistic tasks that require them to find specific items or information within the tested structure or design, and relevant questions to gather their feedback on difficulty and confidence.
- Launch a Tree Testing Study: Configure your tree testing study using an online tool, adding your tree, instructions, tasks, and questions. Run a pilot test first to ensure clarity and functionality before launching to the full participant group.
- Analyze Results and Make a Decision: Review the data provided by the tree testing tool, focusing on metrics like success rates, directness, and time taken. Compare results across different tasks or variations of the information structure to identify areas for improvement. Make changes to your information architecture based on the findings.
- Launch a First-Click Test: Configure your first-click test using an online tool, adding your screen design (mockup, prototype, or live product), instructions, tasks, and questions. This test specifically tracks where users click first to begin a task.
- Analyze First-Click Results: Analyze the data from your first-click test, including clickmaps (showing hot and cold areas), time to first click, and participant responses to confidence and difficulty questions. Compare results across different design variations to determine which one leads to a better starting point for users.
- Track Lostness: During usability testing, track the number of different pages visited, the total number of pages visited (including revisits), and the minimum number of pages required to complete a task to calculate a lostness score. A higher score indicates greater difficulty in finding what’s needed.
- Make Changes and Re-evaluate: Based on the insights gained from tree testing, first-click testing, and lostness metric tracking, make informed changes to your product’s information architecture, navigation, or screen design. Re-evaluate with subsequent studies to validate the impact of these changes on findability.
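The lostness metric described above has a standard formula (due to Pauline Smith), combining the three page counts the chapter lists. A minimal sketch, with hypothetical session numbers:

```python
from math import sqrt

def lostness(unique_pages: int, total_pages: int, minimum_pages: int) -> float:
    """Compute the lostness metric from a usability-test session.

    N = unique pages visited, S = total pages visited (including revisits),
    R = minimum pages needed for the task. A score of 0 means a perfect
    path; higher scores indicate the user had more trouble finding things.
    """
    n, s, r = unique_pages, total_pages, minimum_pages
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# Perfect path: visited exactly the required pages, no revisits.
print(lostness(unique_pages=4, total_pages=4, minimum_pages=4))  # → 0.0

# Wandering: 8 distinct pages, 14 visits in total, only 4 were needed.
print(round(lostness(unique_pages=8, total_pages=14, minimum_pages=4), 2))  # → 0.66
```

Comparing scores per task before and after a navigation change gives a concrete way to verify that findability actually improved.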
By utilizing these findability research methods, you can ensure that users can easily and efficiently locate the information and functionality they need within your product, leading to a more positive and successful user experience.
How to Find Participants for Research?
This chapter addresses the common challenge of recruiting participants for user research, highlighting that while it can be a bottleneck, there are numerous accessible and effective methods available. It focuses specifically on leveraging social media platforms for participant recruitment.
Answering the Question with Social Media Recruiting
Finding the right participants is crucial for collecting valid and reliable research data. This section outlines a step-by-step process for utilizing social media to reach your target audience and recruit them for your studies.
- Identify Participant Criteria: With your team, brainstorm and list the key attributes of your ideal research participants. The more specific your criteria, the better you can target your recruitment efforts.
- Transform Criteria into Screening Questions: Convert your participant criteria into measurable benchmarks and then into neutral screening questions that avoid revealing the desired answer. This helps ensure you are recruiting people who genuinely qualify.
- Create a Screening Questionnaire: Compile your screening questions into a user-friendly online form. Include questions to gather contact information and availability, in addition to screening criteria.
- Identify Keywords for Your Audience: Determine the language and jargon your target audience uses when discussing topics related to your product or domain. Use brainstorming, a thesaurus, or tools like Google’s Keyword Planner to identify relevant keywords.
- Find Target Groups and Pages on Facebook: Use your keywords to search for relevant groups and pages on Facebook where your target audience is likely to be active. Prioritize open groups and pages with a large number of members or followers.
- Find Target Hashtags on Twitter: Convert your keywords into relevant hashtags and search for them on Twitter to identify hashtags that are actively used by your target audience.
- Find Target Communities and Pages on Google Plus: Use your keywords to search for relevant communities and pages on Google Plus. Prioritize open communities and pages with a significant number of members or followers.
- Post Screener to Facebook, Twitter, and Google Plus: Share your screener questionnaire with a clear call to action on the identified social media groups, pages, and communities. Include an attractive image to increase visibility and engagement. Consider tagging relevant pages or asking for retweets.
- Track Responses and Select Participants: Monitor responses to your screener as they come in. Filter responses based on your predefined criteria to identify qualifying participants. Select the desired number of participants based on your study’s needs and availability.
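Filtering screener responses against predefined criteria can be automated once submissions are exported from the form. A minimal sketch; the criteria, field names, and responses are hypothetical examples, not from the book.

```python
# Hypothetical screener criteria: each entry maps a questionnaire field
# to a predicate that a qualifying participant must satisfy.
criteria = {
    "shops_online_per_month": lambda v: v is not None and v >= 2,
    "owns_smartphone": lambda v: v is True,
    "works_in_tech": lambda v: v is False,  # exclude industry insiders
}

def qualifies(response: dict) -> bool:
    """Check a screener response against every criterion."""
    return all(check(response.get(field)) for field, check in criteria.items())

# Made-up screener submissions collected from social media posts.
responses = [
    {"name": "A", "shops_online_per_month": 3, "owns_smartphone": True, "works_in_tech": False},
    {"name": "B", "shops_online_per_month": 0, "owns_smartphone": True, "works_in_tech": False},
]

selected = [r["name"] for r in responses if qualifies(r)]
print(selected)  # → ['A']
```

Keeping the criteria in one place also documents, for the whole team, exactly who was and wasn’t eligible for the study.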
By strategically utilizing social media and following these steps, you can effectively reach and recruit research participants who align with your target audience, overcoming a significant hurdle in conducting user research.
Conclusion
“Validating Product Ideas Through Lean User Research” provides a compelling case for incorporating user research into every stage of the product development lifecycle. It challenges the common practice of relying on intuition and assumptions, advocating instead for a systematic, data-driven approach to understanding user needs, behaviors, and preferences. By mastering the lean research techniques presented – including experience sampling, interviewing, observation, diary studies, MVP experiments, usability testing, and findability assessment – product teams can significantly reduce the risk of building products that fail to resonate with their target audience.
- Build, Measure, Learn: The book reinforces the core Lean Startup principle by providing practical methods for the “Learn” phase, enabling teams to make informed decisions and iterate based on real user data.
- Focus on Problems, Not Just Solutions: A key takeaway is the importance of deeply understanding the problems users face before designing solutions. The methods presented, particularly observation and diary studies, are geared towards uncovering these real-world challenges.
- Prioritize Behavioral Data: The book emphasizes gathering behavioral data (what users do) over attitudinal data (what users say they will do or think) through techniques like A/B testing and MVP experiments.
- Start Lean, Iterate Often: Lean user research methods are designed to be quick, cost-effective, and actionable, enabling teams to gather insights early and frequently throughout the development process.
- Team Sport: Many of the recommended techniques emphasize collaborative analysis and discussion, fostering a shared understanding of user insights across disciplines.
- Empathetic Design: By directly engaging with users through methods like interviewing and observation, teams can build empathy, leading to more user-centered and impactful designs.