Just Enough Research: Unlocking Design Insight for the Real World

Erika Hall’s “Just Enough Research” is a concise yet powerful guide that demystifies the often intimidating world of design research. Written for anyone involved in building websites and digital products – from designers and developers to project managers and product owners – this book argues that research isn’t a luxury or a solely academic pursuit, but a critical, accessible tool for making better decisions, faster. Hall’s core premise is that in a world of constrained budgets, absurd schedules, and opinion-driven feedback, pragmatic and targeted research is essential to avoid wasting time and resources on solving the wrong problems. This summary will break down every important idea, example, and insight from the book in clear, accessible language, committing to comprehensive coverage of Hall’s invaluable framework for practical, impactful research.

Enough Is Enough

This foundational chapter challenges the notion that research is an expensive, time-consuming endeavor reserved for “eggheads in a lab.” Hall introduces the Segway’s failure as a cautionary tale, illustrating that even brilliant engineering can flop without understanding human context. The Segway, while technologically advanced, failed because it didn’t fit into existing transportation conventions or meet a real, unmet need. This highlights that for humans, context is everything.

Hall asserts that for designers, coders, and writers, admitting “I don’t know” can be terrifying in cultures that mythologize “creative genius” or prioritize speed over understanding. Yet research acts as a periscope, offering a clearer view of the surroundings. It saves time and effort by determining the right problem, identifying organizational blockers, uncovering competitive advantages, understanding customer motivations, pinpointing high-impact small changes, and revealing personal blind spots. By the end, readers will have “just enough knowledge to be very dangerous indeed,” cultivating a valuable skeptical mindset.

The chapter distinguishes between different types of research:

  • Pure Research: Aims to create new human knowledge (e.g., neuroscience), based on observation or experimentation, published in peer-reviewed journals. This is science, with rigorous standards.
  • Applied Research: Borrows from pure research for a specific real-world goal (e.g., improving hospital care). Ethics remain important, but methods can be more flexible.
  • Design Research: The focus of the book; inquiries integral to design work itself, not about design. It’s largely about understanding “end users” (a dehumanizing but instrumental term). As Jane Fulton Suri of IDEO notes, design research inspires imagination and informs intuition by exposing patterns in behavior, exploring reactions to prototypes, and shedding light on the unknown through iterative hypothesis and experiment.

Hall emphasizes that simply “being human is insufficient for understanding most of our fellows.” Designers must approach familiar people and things as unknown, shedding assumptions. Using the example of the Fantastic Science Center, Hall illustrates how research can help prioritize web improvements, from a new brochure site to mobile apps. Research is not asking people what they like (subjective, empty), not a political tool (don’t let methods be guided by appearance or power struggles), and applied research is not science (avoid arguments about statistical significance; focus on useful insights). This book is for those who aren’t professional researchers, making core concepts accessible and providing practical techniques to cut through laziness, arrogance, and politics. Ultimately, research is critical thinking, fostering a collaborative spirit to avoid pitfalls like relying on focus groups.

The Basics

This chapter delves into the core practices and fundamental ideas for effective research, covering who should conduct it, different types, and roles within the research process. It also addresses common objections to research and strategies to overcome them.

Ideally, everyone on the design team should participate in research. This fosters direct experience, tailors the process to needs, and inspires application of insights. When a team collectively collects insights, they are more likely to apply them, leading to less time explaining rationale and more focus on merit. While someone needs to be the research lead, ensuring protocol and quality, the approach can be collaborative. The most important thing is a shared understanding of the purpose, roles, and process.

Hall stresses that every design project is a series of decisions, and research activities should support specific anticipated decisions. The type of research chosen depends on the purpose (what decisions are in play) and the topic (what you’re asking about).

Research can be classified into different types based on its objective:

  • Generative or Exploratory Research: (“What’s up with…?”) Done before the problem is fully defined, leading to ideas and problem definition. Includes interviews, field observation, and literature reviews. For example, understanding “new parents” to find valuable offerings.
  • Descriptive and Explanatory Research: (“What and how?”) Used when a design problem is identified, to understand context and design for the audience. Activities are similar to generative research, but the high-level question changes from “What is a good problem to solve?” to “What is the best way to solve the problem I’ve identified?” For instance, researching newly diagnosed eye disease patients for educational materials.
  • Evaluative Research: (“Are we getting close?”) Testing potential solutions to ensure they work and meet requirements. This is an ongoing, iterative process, with usability testing being the most common type.
  • Causal Research: (“Why is this happening?”) Understanding cause-and-effect relationships after a solution is implemented. Often involves analytics and A/B or multivariate testing, and may extend beyond site performance to external factors.
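Causal research of this kind often boils down to a simple split-test readout. A minimal sketch in Python follows; all visitor and conversion counts are invented for illustration, and, in keeping with Hall's advice, it reports rates rather than arguing over statistical significance:

```python
# Minimal A/B readout for a causal question: compare two variants'
# conversion rates. All counts here are invented for illustration.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the target action."""
    return conversions / visitors

def ab_readout(a_conv, a_vis, b_conv, b_vis):
    """Return each variant's rate and B's relative lift over A."""
    rate_a = conversion_rate(a_conv, a_vis)
    rate_b = conversion_rate(b_conv, b_vis)
    lift = (rate_b - rate_a) / rate_a
    return rate_a, rate_b, lift

rate_a, rate_b, lift = ab_readout(120, 2400, 150, 2400)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  lift: {lift:+.0%}")
```

The point is the shape of the question, not the arithmetic: a readout like this tells you *what* changed, and qualitative follow-up tells you *why*.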

Research roles are clusters of tasks, often shared by one person. These include Author (plans/writes study), Interviewer/Moderator (interacts with participants), Coordinator/Scheduler (plans time, schedules sessions), Notetaker/Recorder (captures data), Recruiter (screens participants), Analyst (reviews data for patterns), Documenter (reports findings), and Observer (watches in progress). Flexibility in roles and continuous learning are encouraged.

Hall provides a handy list of objections and responses to pushback against research:

  • “We don’t have time”: You don’t have time to be wrong; redoing work takes longer.
  • “We don’t have the expertise/budget”: This book provides tools; online research, interviewing users, and critical thinking cost little.
  • “The CEO will dictate anyway”: Research can sway egos; if not, get a different job.
  • “One methodology is superior (qualitative vs. quantitative)”: The question determines the research type.
  • “You need to be a scientist”: Applied research needs curiosity, depersonalization, communication, and analytical thinking, not a science degree.
  • “You need infrastructure”: A laptop and internet are usually sufficient.
  • “It will take too long”: Upfront research speeds up work by avoiding arguments and solving wrong problems.
  • “You can find everything in beta”: Many crucial insights are needed before design/coding.
  • “We know the issue/users/app/problem inside and out”: Familiarity breeds assumptions; a fresh look is helpful.
  • “Research will change scope”: Better to adjust scope intentionally early than be surprised later.
  • “Research will get in the way of innovation”: Understanding current context enables relevant innovation.

Underlying most objections are laziness and fear. Hall encourages fighting the “individual genius” myth and embracing discomfort in talking to people. She then discusses research in any situation:

  • Freelance: Include research in your fee; it strengthens your design and defense.
  • Client services agency: Improve process with each project; external perspective allows asking “naive” but insightful questions.
  • In-house at an established company: Politics are huge; understand them. Leverage customer service as a data trove (inbound support requests, talking to reps). Understand product and marketing decision-making.
  • In-house at a startup: Easier information sharing; focus on audience clarity and business context. Identify high-risk assumptions.
  • Working with an agile development team: Agile focuses on process, not outcomes. Research provides guiding mandates. Decouple research planning from development. Prioritize high-value users, analyze data quickly/collaboratively, and defer less urgent research. Always be recruiting for continuous research.

Just enough rigor emphasizes that while not a professional researcher, discipline and checklists are key. Hall details covering bias:

  • Design bias: Creeps in when bias isn’t acknowledged or information is included/excluded based on personal goals.
  • Sampling bias: Almost unavoidable in quick qualitative research; be mindful of general conclusions.
  • Interviewer bias: Inserting opinions; practice neutral interviewing.
  • Sponsor bias: Participants being overly gentle due to company’s hospitality; use general descriptions.
  • Social desirability bias: Participants wanting to look good; emphasize honesty and confidentiality.
  • The Hawthorne effect: Behavior changing just because you’re observing.

The ethics of user research are crucial: ensure the overall project is ethical, avoid deceptive methods, secure informed consent, and prioritize participant safety and privacy. Hall advises a skeptical mindset and awareness of one’s own limits.

Finally, how much research is enough? Hall asks to identify highest-priority questions and assumptions carrying the biggest risk. If being wrong about an assumption incurs significant costs (e.g., solving the wrong problem, lacking organizational support, missing competitive advantage, irrelevant features), then research is warranted. The “satisfying click” of pieces falling into place indicates enough research has been done, offering clarity and confidence to move forward.

The Process

This chapter outlines the “systematic” aspect of systematic inquiry, detailing the six essential steps for any research study, whether it spans a month or a single morning. Hall stresses that being methodical, even slightly, saves precious time and mental energy.

The six steps of the research process are:

  1. Define the Problem: A useful research study starts with a clear problem statement, solving for a lack of information. The statement should use an outcome-oriented verb like “describe,” “evaluate,” or “identify” (avoid open-ended “understand” or “explore”). For example, “We will describe how parents of school-age children select and plan weekend activities.”
  2. Select the Approach: The problem statement guides the general research type (e.g., user research, evaluative research). Available resources (time, money, people) determine the specific approach. Hall encourages a quick description of the study, combining the question with the chosen method (e.g., “We will describe how parents of school-age children select and plan weekend activities by conducting telephone interviews and compiling the results”).
  3. Plan and Prepare for the Research: This involves identifying a point person (keeper of the checklist), sketching out an initial plan (time, money, roles, subjects, recruitment, materials). Hall advises embracing the unexpected and being ready to adapt the plan based on new facts. It’s crucial to be clear about how changes might affect the larger project and to consider trade-offs. The research plan should include the problem statement, duration, roles, recruitment strategy, incentives, and tools.
    • Recruiting: Hall calls this a “time-consuming pain in the ass” but essential for quality qualitative research. Good participants represent your target and can articulate their thoughts clearly. The web is a great place to find participants, using a screener (an online survey) to qualify potential subjects and filter out bad matches. Key questions for a screener include: specific behaviors (e.g., people who ride bikes), tool knowledge and access (e.g., familiar with a mobile device for app testing), and domain knowledge (e.g., mechanics for an auto app). Screeners should be short and vague about the study’s exact topic to prevent participants from guessing desired answers. Following up with phone calls for in-person tests can further weed out unsuitable candidates.
  4. Collect the Data: This is the “go time” – conducting interviews, field observations, or usability tests. Data (photos, videos, notes) must be stored on a shared drive quickly, using a consistent naming convention (e.g., “Study-Subject Name-Year-Month-Day”). Hall stresses the importance of checking files and noting initial impressions between sessions. She also discusses materials and tools, emphasizing using what you already have and familiar tools to avoid technical difficulties.
    • Interviewing: The most effective way to understand another person’s perspective. It requires basic social skills, practice, and self-awareness. The interviewer acts as a host and student, aiming for a comfortable interaction yielding needed information.
    • Usability Testing: Conducting a directed interview while a representative user attempts tasks with a prototype or product. Goal: determine usability and uncover resolvable issues. It’s an ongoing, iterative process. It uncovers problems with labeling, structure, mental model, and flow, and reveals how users think. It does not provide vision, predict market success, prioritize tasks, or substitute for QA testing. Hall recommends avoiding “usability labs” in favor of natural environments and starting with “cheap tests” (paper prototypes, sketches) before more expensive ones.
    • Literature Review: When direct user access is limited or for background, turn to documented studies (e.g., Pew Research Center). Use these to inform understanding, validate assumptions, or complement other work. Be mindful of the source’s questions, sample, biases, and date.
  5. Analyze the Data: This is where the patterns emerge. Gather all collected data, review notes, and look for meaningful patterns that answer initial problem statements. Hall emphasizes getting everyone involved in analysis (especially those who participated in data collection and those who will be designing/coding).
    • Structuring an Analysis Session: A fun group activity that takes half a day to a few days. Steps include summarizing goals, describing participants/data gathering, pulling out quotes/observations, grouping related items into themes, summarizing findings, and documenting. Ground rules emphasize focusing on user understanding, respecting structure, differentiating observations from interpretations, and no solutions during analysis.
    • What is the Data? Look for quotes and observations indicating goals, priorities, tasks, motivators, barriers, habits, relationships, tools, and environment.
    • Outliers: Participants whose behaviors don’t match the target user profile should be noted and their data set aside to avoid skewing findings.
  6. Report the Results: The output of analysis is typically a summary report and one or more models (discussed in Chapter 8). The reporting format depends on how decisions will be made (e.g., informal for small teams, more summarized/polite for executives). A brief, well-organized summary (goals, methods, insights, recommendations) is always superior to a lengthy, ignored report.
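The screener described in step 3 can be scored mechanically once responses arrive. A minimal sketch in Python, where the question keys, qualifying answers, and participant data are all invented for illustration:

```python
# Screen survey respondents against recruiting criteria.
# Field names and qualifying answers are invented for illustration.

def qualifies(response):
    """True if a respondent matches the target profile for, say,
    an in-person test of a cycling app on a personal smartphone."""
    return (
        response.get("rides_bike_weekly") is True      # specific behavior
        and response.get("owns_smartphone") is True    # tool knowledge/access
        and response.get("age", 0) >= 18               # basic demographics
    )

responses = [
    {"name": "P1", "rides_bike_weekly": True, "owns_smartphone": True, "age": 34},
    {"name": "P2", "rides_bike_weekly": False, "owns_smartphone": True, "age": 29},
    {"name": "P3", "rides_bike_weekly": True, "owns_smartphone": False, "age": 41},
]

candidates = [r["name"] for r in responses if qualifies(r)]
print(candidates)  # only P1 matches every criterion
```

As the chapter notes, the screener itself should stay short and vague about the study's exact topic; the scoring happens on your side after responses come in, followed by phone screening for in-person sessions.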

Hall concludes by stating that research is an iterative process. The only way to design successful systems for imperfect humans is to talk to them in the messy real world.

Organizational Research

This chapter focuses on the often-overlooked but critical aspect of organizational research, which involves understanding what drives a business, how its internal pieces work together, and its capacity for change. Hall emphasizes that design doesn’t happen in a vacuum; it occurs “in the warm, sweaty proximity of people with a lot on their minds.” Budgets, approvals, timing, and resource availability all hinge on successfully navigating an organization.

Hall likens a small startup to an island (clear landscape) and a large corporation to Australia (complex, many dangers). Regardless of size, an organization is a set of individuals and a set of rules (explicit and implicit). Understanding this environment is key to navigating it and creating the best product. Organizational research can even “put an MBA out of work” by providing practical insights traditionally sought from business analysts. It’s similar to user research, but you’re talking to current stakeholders instead of potential customers. The observer effect can be positive, as asking hard questions forces reflection and reveals crucial differences in understanding.

Who are stakeholders? Hall defines them as “those groups without whose support the organization would cease to exist,” and more practically, anyone whose support is essential for your project’s success. This includes:

  • Executives: For overall mission and vision.
  • Managers: For resource allocation and incentives.
  • Subject matter experts: For specialized industry or business knowledge.
  • Staff in various roles: Especially customer service and salespeople, who have direct user knowledge and are often overlooked.
  • Investors and board members: Depending on their level of interest and influence.

Interviewing stakeholders is a “rich source of insights into the collective mind of an organization.” It helps uncover misalignment between documented strategy and daily decision-making, and highlights strategically important issues.

  • What stakeholder interviews are for: They provide a complete perspective by considering issues from different roles, reveal insights at individual and aggregate levels (e.g., marketing director vs. customer service insights). They also help understand organizational structure, project fit, and approval processes.
  • Neutralizing politics: A significant benefit. Understanding opposition helps gain allies. Interviews are an opportunity to “sell people on the value of your work in terms that matter to them.” Paul Ford’s “The Web Is a Customer Service Medium” highlights the “Why wasn’t I consulted” (WWIC) phenomenon: humans need to be consulted.
  • Better requirements gathering: Business requirements are often defined in an ideal state, but interviews reveal real-world problems and priorities. Hall notes that many projects start without clear business requirements, making success metrics ambiguous.
  • Understanding organizational priorities: How important is the project really? This affects team attention and commitment.
  • Tailoring the design process: Understanding typical workdays, decision-making styles (collaborative vs. autocratic) helps adapt your process to existing habits, especially in cross-functional or vendor-involved projects.
  • Getting buy-in from stakeholders: Asking for input upfront is a “peerless prophylactic” against late-stage objections. It educates and empowers.
  • Understanding how your work affects the organization: Your design will impact everyone (executives, customer service, sales, production). Identify who will need to cope with changes and whether resources are available. This clarifies true organizational support.
  • Understanding workflow: How complex work gets done. Your design must fit into existing workflows. Diagramming current and proposed workflows helps track ramifications and ensure the organization changes to accommodate the new design.

Hall advises sharpening your tact by preparing for interviews, researching interviewees (without being creepy), and prioritizing your interview list. She generally recommends individual interviews over group ones, especially in political organizations, to get an accurate picture and assure confidentiality. Group interviews can save time for closely working, equally influential teams. Email interviews are a last resort for remote or unavailable stakeholders.

A basic interview structure includes:

  • Introduction: Introduce yourself, state purpose, explain info usage, get recording permission.
  • Body: Ask open-ended questions (e.g., “Tell me about…”) and follow-up questions (“Tell me more about that”). Allow pauses. Use the guide as a checklist, not a script.
  • Conclusion: Summarize, verify participation level, and allow follow-up.
  • Basic Questions: About roles, duties, typical day, close collaborators, project success definition, concerns, challenges, and expected changes. Also, specific questions for stakeholders who are users of internal systems.

Dealing with a hostile witness involves staying calm, redirecting, and understanding reasons for hostility (e.g., unpreparedness, power move, feeling time-wasted). Never let them take control; if necessary, cut the interview short. Practice with challenging scenarios.

Documenting interviews involves noting attitudes, goals, alignment with project success, influence, communication patterns, needed participation, and harmony/conflict with other information. You’ve interviewed enough when you confidently know all stakeholders, their roles, attitudes, influence, project benefits/suffering, likelihood of project failure from their end, workflow changes, resources available/required, business requirements, and whether goals are truly shared (no hidden agendas).

What to do with stakeholder analysis involves creating a clear statement of business requirements. These must be cohesive, complete, consistent, current, unambiguous, feasible, and concise. The documentation should not contain specific solutions or design requirements. It may have different versions for internal teams versus broader distribution.

Key elements to include in documentation:

  • Problem statement and assumptions: What needs to be solved?
  • Goals: Reconciled concepts of success.
  • Success metrics: Qualitative and quantitative measurements.
  • Completion criteria: How will you know when it’s done?
  • Scope: What’s included and excluded.
  • Risks, concerns, and contingency plans: Acknowledge potential failures and plan around them.
  • Verbatim quotes: Without attribution, to reveal perspectives.
  • Workflow diagrams: Current and proposed, especially for internal projects.

Hall concludes that understanding organizational habits and capabilities is as relevant as user behaviors, making organizational research essential for significant design projects. It can neutralize politics, clarify requirements, and improve the odds of successful change.

User Research

This chapter delves into user research, specifically ethnography, as a means for designers to develop empathy and understand the human world they are defining. Hall emphasizes that this isn’t just about surveying opinions or running focus groups, but about observing and understanding users in their natural cultural context.

The goal of user research is to:

  • Understand the true needs and priorities of your target audience.
  • Understand the context in which users will interact with your design.
  • Replace assumptions with actual insight.
  • Create a mental model of how users see the world.
  • Create design targets (personas) to represent user needs in decision-making.
  • Hear real people’s language to develop the voice of the site/application.

Hall highlights that everything is in context, which includes:

  • Physical environment: Where and how will the product be used (office, sofa, outdoors)? Is the user alone or interrupted?
  • Mental model: The user’s pre-existing internal concept of and associations with the system or situation. Intuitive design matches this model.
  • Habits: How users already solve problems, or existing habits into which a new product can be inserted.
  • Relationships: How the product fits into users’ social networks and interpersonal dynamics.

The chapter strongly asserts that assumptions are insults. Designing for yourself risks alienating intended users, building discrimination into the product, and pushing people away. Every design decision should be well-informed and intentional. The first rule of user research is: never ask anyone what they want. People often give answers they think you want to hear or that reflect how they wish to see themselves. Instead, like Dr. House, you need to “break into their brain” by asking the right questions and observing details. The real question is often veiled; for example, instead of “What’s your greatest weakness?”, ask “Tell me about a situation at work where you had to deal with something unexpected.”

Ethnography is a set of qualitative methods to understand and document the activities and mindsets of a cultural group observed in their ordinary environment. Its fundamental question is “What do people do and why do they do it?” with the rider “…and what are the implications for the success of what I am designing?”

Hall outlines the four Ds of design ethnography:

  • Deep Dive: Get to know a small, sufficient number of representative users very well.
  • Daily Life: Fight the urge for control; get into the “messy and unpredictable” field where users actually live and work. Participant observation is key.
  • Data Analysis: Systematically sift through observations to understand their meaning, moving beyond just meeting interesting people.
  • Drama! Create lively narratives (personas, scenarios) to rally the team around user behavior, keeping design honest.

Interviewing humans is a core skill. It’s not about being a good talker, but about shutting up and actively listening.

  • Preparation: Create an interview guide with a brief study description, demographic questions, icebreakers, and primary focus questions. Gather background on the topic.
  • Interview Structure: Three boxes, loosely joined:
    • Introduction: Introduce yourself, state purpose (without influencing answers), explain data usage, get recording permission, verify demographics.
    • Body: Ask open-ended questions to encourage talking. Use probing questions (“Tell me more about that”). Allow pauses. Use the guide as a checklist, not a script.
    • Conclusion: Transition gently, summarize, verify, and cover administrative topics. Don’t be afraid to end early if unproductive.
  • Conducting the interview: Be a host and student. Put the participant at ease. Be an “invisible, neutral presence.” Practice active listening (mm-hmms, nods). Avoid talking about yourself or giving unsolicited advice.

A handy checklist for effective user research:

  • Create a welcoming atmosphere.
  • Listen more than you speak.
  • Accurately convey thoughts and behaviors.
  • Conduct research in natural contexts.
  • Start with general goals, avoid narrow focus.
  • Encourage sharing and natural behavior.
  • Avoid leading and closed questions; ask follow-ups.
  • Prepare an outline, but be flexible.
  • Snap photos of interesting things.
  • Note exact phrases and vocabulary.
  • Pay attention after recording stops for valuable revelations.

Contextual inquiry takes these skills into the field, observing participants in their actual environment as they perform activities. This reveals “janky work-arounds so unconscious and habitual the individual has completely forgotten them.” It’s great for accurate scenarios and understanding environmental impacts. Scott Cook’s “Follow Me Home” practice at Intuit is a prime example. Key aspects include travel, getting situated, establishing trust, observing diligently (and noting everything), and summarizing. Contextual inquiry can be very inspirational, revealing unexpected problems and opportunities.

Finally, Hall issues a strong warning: focus groups: just say no. They are “research theater,” creating an artificial environment that doesn’t yield insights into behavior or context. They are prone to social desirability bias and group dynamics, and one bad recruit can derail the session. While potentially useful for generating ideas, they are antithetical to ethnography.

The chapter concludes by urging designers to accept no substitute for listening to and observing real people. Even a few phone calls can change everything. The information gathered will continually pay dividends, grounding design decisions in real human needs and behaviors, and fostering powerful empathy.

Competitive Research

This chapter addresses a crucial question for any product or service: Who is the competition? Hall clarifies that the competition extends far beyond obvious industry rivals. It includes “everything else anyone has considered or started using that solves the problem you want to solve or helps them avoid it.” This broad definition encompasses Facebook, Apple, Wikipedia, a nosey neighbor, inertia, or even marijuana – anything that competes for a target customer’s attention.

The hardest competitor to beat is the one your potential customers are using right now, as switching costs (even just habits) are high. Customers must “love you more than they hate change.” This chapter follows user research because you need to understand not only who your competitors are from a business perspective, but also who competes for attention in the minds of your target users.

Competitive research should be frequent and quick, constantly asking:

  • “What matters to our customers?” (user question)
  • “How are we better at serving that need than any competitor?” (product question)
  • “How can we show our target customers that our product is the superior choice?” (marketing question)

Hall emphasizes that competitive research often only reveals the visible outside of competitors’ work, so critical thinking and extrapolation are needed to understand their underlying strategies.

The chapter introduces SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats), a framework devised by Albert S. Humphrey. This 2×2 grid helps guide strategy. Internal strengths and weaknesses are understood through organizational research, while external opportunities and threats are identified through competitive research. Knowledge gained from competitive research itself is a competitive advantage. The focus should be on competitive opportunities and threats.
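The grid itself can be captured as a simple data structure, with the internal cells filled from organizational research and the external cells from competitive research. A minimal sketch, with invented example entries:

```python
from dataclasses import dataclass, field

# A SWOT grid: internal factors (strengths, weaknesses) come from
# organizational research; external factors (opportunities, threats)
# come from competitive research. Example entries are invented.

@dataclass
class Swot:
    strengths: list = field(default_factory=list)      # internal, helpful
    weaknesses: list = field(default_factory=list)     # internal, harmful
    opportunities: list = field(default_factory=list)  # external, helpful
    threats: list = field(default_factory=list)        # external, harmful

grid = Swot()
grid.strengths.append("Responsive in-house support team")
grid.threats.append("Competitor ships a free tier next quarter")

# Competitive research concentrates on the external column:
external = grid.opportunities + grid.threats
print(len(external))
```

Keeping the four cells explicit makes the division of labor concrete: organizational research owns the top row, competitive research owns the bottom.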

A competitive audit involves compiling a list of competitors (including web search results and those mentioned in user interviews) and assessing their relevant work (websites, apps, kiosks, social groups). For each competitor and touchpoint, answer:

  • How do they explicitly position themselves?
  • Who do they appear to be targeting (and how does it overlap/differ from yours)?
  • What are their key differentiators?
  • To what extent do they embody your positive/negative brand attributes?
  • How do their user needs/wants served overlap/differ from yours?
  • What are they doing particularly well or badly?
  • Where are the emerging conventions, opportunities for superiority, or good practices to adopt?

A brand audit involves a hard look at your own brand. Your brand is your reputation and its signifiers. For many interactive products, the brand experience is the user experience.

Key questions for a brand audit:

  1. Attributes: Which characteristics should people associate with the brand/product, and which should be avoided?
  2. Value proposition: What do you offer that others don’t, and how is this communicated?
  3. Customer perspective: What associations do existing/potential customers have with your brand (from ethnographic interviews)?

The name is the single most important brand element: it must be unique, unambiguous, and easy to spell and say (e.g., mint.com versus a playful made-up name like Geezeo). The logo is the illustrative manifestation of the brand, and its importance depends on context (e.g., athletic apparel versus a new web app); native mobile apps present a particular challenge due to size constraints. Effective logo assessment involves listing all the contexts in which people are likely to encounter it and comparing it against competitors’.

Usability-testing the competition is a powerful technique. By using task-based usability testing on a competitor’s product, you can directly understand their strengths and weaknesses from the user’s point of view, identify opportunities to develop your advantages, and gain insight into how target users conceptualize core tasks.

Competitive research is about understanding the fastest-moving target in design. A user-eye view of comparative strengths and weaknesses helps focus messaging and hone the product’s image, carving out a niche in time.

Evaluative Research

This chapter explores evaluative research, which is the process of assessing the merit of your design. Hall emphasizes that this is research you should never stop doing. Evaluation happens at various stages of a project:

  • Early stages: Heuristic analysis and usability testing on existing sites, competitor products, or even early sketches.
  • Live products (even alpha): Quantitative data analysis (site analytics) to see actual user interaction.

The best evaluation combines quantitative methods (what’s happening, numbers) and qualitative methods (why it’s happening, individual insights).

Heuristic analysis is the most casual usability evaluation method. “Heuristic” means “based on experience,” referring to qualitative guidelines or accepted usability principles. Jakob Nielsen’s ten heuristics are the most famous, including:

  • System status visibility: Feedback on what’s happening.
  • Match between system and real world: Familiar language and conventions.
  • User control and freedom: Emergency exits, undo/redo.
  • Consistency and standards: Similar things behave similarly.
  • Error prevention: Design to avoid errors, not just recover from them.
  • Recognition rather than recall: Options visible, instructions easy to find.
  • Flexibility and efficiency of use: Shortcuts for experts.
  • Aesthetic and minimalist design: No irrelevant info.
  • Help users recognize and recover from errors: Helpful error messages.
  • Help and documentation: System usable without docs, but help available and task-oriented.
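A heuristic review is just the ten principles plus a place to file what reviewers notice. A minimal sketch of such a log (the function name and structure are mine, not the book's):

```python
# The ten Nielsen heuristics, as summarized above.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize and recover from errors",
    "Help and documentation",
]

def record_finding(findings, heuristic, note):
    """File a reviewer's note under one of the ten heuristics."""
    if heuristic not in NIELSEN_HEURISTICS:
        raise ValueError(f"Unknown heuristic: {heuristic}")
    findings.setdefault(heuristic, []).append(note)
    return findings

findings = {}
record_finding(findings, "Error prevention",
               "Checkout form lets users submit an empty cart")
```

Grouping notes by heuristic keeps the review tied to principles rather than to personal taste, which is the point of the exercise.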

Advantages: quick and cheap, no user recruitment needed, good for catching obvious issues in early prototypes. Downsides: it's simplistic, can miss issues, less experienced evaluators may not spot every problem, and it focuses on the system rather than on users' actual interactions. It's a sanity check, not a substitute for usability testing.

Usability testing is the “absolute minimum standard” for anything designed for humans. If a design thwarts intended users, it’s a failure. Hall notes that “unusable objects are all around us,” causing “a little more sadness in the world.” She states, “As a designer or a developer, you either care about usability, or you’re a jerk.” Usability is crucial for product success, especially with alternatives available. It leads to better word of mouth and lower support costs.

Nielsen defines usability by five components:

  • Learnability: Ease of basic task accomplishment first time.
  • Efficiency: Speed of task performance after learning.
  • Memorability: Ease of reestablishing proficiency after a period of non-use.
  • Errors: Number, severity, and ease of recovery.
  • Satisfaction: Pleasantness of use.

Usability testing can save you from “unnecessary misery” associated with your brand.

  • What usability testing does: Uncovers significant problems (labeling, structure, mental model, flow), verifies interface language, reveals user thinking about problems, demonstrates approach viability to stakeholders.
  • What usability testing doesn’t do: Provide vision or breakthrough designs, predict market success, prioritize user tasks, or substitute for QA.

Cheap tests first, expensive tests later: Start with paper prototypes, sketches, and internal office tests before moving to the field or specific audiences. Test competitors’ products. Test frequently as design decisions are made (e.g., every two weeks with sprints). Avoid testing right before launch; it’s the second most expensive kind of testing (the most expensive is letting users test after launch, via your customer-service channel).

Preparing for usability testing involves building practices into workflow, creating a test process and checklist, continually recruiting participants, and assigning a point person.

  • What you will need: A plan, prototype/sketch, 4-8 participants per user type, facilitator, observer, documentation methods, timer.
  • Usability test plans: Centered around tasks. Use personas and their core tasks. Write brief scenarios (e.g., buying tickets for a science center). Prioritize tasks by importance (e.g., buying tickets is critical).
  • Recruiting: Essential for effective tests; participants must share key goals with target users.
  • Facilitating: Requires the right temperament – personable, patient, and able to dispassionately observe failures without intervening or giving hints. Avoid having the designer/developer facilitate their own work initially. Embrace uncomfortable silences. If users blame themselves, ask what they expected.
  • Observing and documenting: A second person should observe and take notes (verbatim quotes, nonverbal frustration, successful/unsuccessful features). Audio recording is fantastic. Video recording can be less valuable due to time/storage, but screen capture with audio is useful. Mobile device testing can be tricky, often requiring creative setups (like MailChimp’s MacBook webcam method). Eye-tracking measures gaze but is expensive and its value debated.

Analyzing and presenting test data: Aim to identify and fix specific significant problems. The outcome is a ranked punch list with rationale.

  • How bad and how often? Rate each problem by severity (high: prevents task completion; moderate: causes difficulty but the task is completed; low: minor problem) and frequency (high: 30%+ of participants; moderate: 11–29%; low: 10% or fewer).
  • It’ll end in tiers: Sort problems into three tiers based on severity, frequency, and task criticality:
    • Tier 1: High-impact, often prevent task completion; high risk if unresolved.
    • Tier 2: Moderate problems with low frequency OR low problems with moderate frequency.
    • Tier 3: Low-impact, affect few users; low risk if unresolved.
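One simplified way to encode these severity/frequency rules in code (task criticality, which the book also weighs, is omitted, and the handling of combinations the summary leaves unstated is my own conservative guess):

```python
def frequency_band(pct_affected):
    # Bands from the summary: high = 30%+, moderate = 11-29%, low = 10% or fewer.
    if pct_affected >= 30:
        return "high"
    if pct_affected >= 11:
        return "moderate"
    return "low"

def tier(severity, pct_affected):
    """Rough triage of a usability problem into Tiers 1-3."""
    freq = frequency_band(pct_affected)
    if severity == "high":
        return 1  # high-impact problems that often prevent task completion
    if (severity == "moderate" and freq == "low") or \
       (severity == "low" and freq == "moderate"):
        return 2
    if severity == "low" and freq == "low":
        return 3  # low-impact, affects few users; low risk if unresolved
    # Remaining combinations (e.g., moderate severity at high frequency)
    # are judgment calls; treat them conservatively as Tier 2 here.
    return 2
```

In practice the punch list is sorted by tier and then attacked top-down, starting with low-effort Tier 1 fixes.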

Get to work: Start with Tier 1 issues, identify low-effort fixes, implement, and retest. Watching users struggle is powerful for convincing stakeholders.

Put the competition to the test: Conduct benchmark usability studies on competitors’ sites using a common set of tasks and scoring to identify their strengths/weaknesses and your own advantages.

Analysis and Models

This chapter unpacks the seemingly mysterious process of qualitative analysis, which Hall describes as the “most natural thing possible” for humans, who are “social creatures and pattern-recognition machines.” It’s where design truly begins, turning messy data into organized insights that lead to clarity in concepts, content, navigation, and interactive behaviors. The collaborative nature of this work ensures that deep understanding is shared across the team.

The analysis process is straightforward:

  • Review notes closely.
  • Look for interesting behaviors, emotions, actions, and verbatim quotes.
  • Write observations on sticky notes (coded to the source user for traceability).
  • Group the notes on a whiteboard.
  • Watch patterns emerge.
  • Rearrange notes as patterns are assessed.

The result is a visual representation of research that can be applied to design work.

An Affinity Diagram is the first, and sometimes only, pass at analysis. It helps extract general design mandates from interviews, which can then be prioritized by business goals. The process involves participants building clusters of related observations on a whiteboard. Once a cluster forms, insights and overarching mandates are extracted. This distillation of patterns from individual data points, especially when done collaboratively, multiplies the research’s value. The diagram itself is a handy visual reference and communication tool.

Steps to create an affinity diagram:

  • Write down observations: Direct quotes or objective descriptions of user actions/statements on sticky notes. Pull out interesting quotes and note user goals (stated or implicit) and specific vocabulary.
    • Example observations: “I reset my password every time I visit the website because I never remember it.” “Participant’s four-year-old daughter interrupted three times during the thirty-minute interview.”
    • Example goals: “I like to start the weekend with some good activities in mind.”
  • Create groups: Group related notes on a whiteboard, then name the pattern and identify the user need (e.g., “Needs reminders for organized activities”).
  • Identify next steps: Extract actionable design mandates or principles.
    • Examples: “When announcing a new exhibit, offer the ability to sign up for a reminder.” “Improve promotion of and navigation to activities and lesson plans.”
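Once each sticky note is coded to its source user and tagged with a theme, the clustering step is a simple grouping. A toy sketch (the themes, quotes, and mandate are illustrative, not from the book):

```python
from collections import defaultdict

# Sticky notes coded to their source user: (user_id, theme, observation)
notes = [
    ("u1", "reminders",  "I like to start the weekend with activities in mind"),
    ("u2", "reminders",  "I forget about exhibits until they've closed"),
    ("u3", "navigation", "Couldn't find the lesson plans page"),
]

# Group related observations into named clusters (the whiteboard step).
clusters = defaultdict(list)
for user_id, theme, observation in notes:
    clusters[theme].append((user_id, observation))

# Each cluster then yields a user need and an actionable design mandate.
mandates = {
    "reminders": "When announcing a new exhibit, offer the ability "
                 "to sign up for a reminder",
}
```

Keeping the user ID on every note preserves traceability back to the source interview, which matters when a cluster's insight is later questioned.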

The affinity diagram helps in decision-making (prioritizing features, identifying additional research needs) and serves as a common reference point for the team.

Creating Personas involves building a fictional user archetype—a composite model from real user data—that represents a group of needs and behaviors. Personas advocate for user needs in product development, balancing business, marketing, and engineering interests.

  • Good personas are the most useful and durable outcome of user research. They are derived from collaborative effort and firsthand user research.
  • Design targets are not marketing targets. Personas are based on behavior patterns and priorities, not market segments.
  • How many personas? As few as possible, representing all relevant behavior patterns. Can reduce numbers by creating relationships or assigning multiple roles.
  • A useful persona is vivid (a few key details) and integrated into the workspace.

Key details for a persona description (place mat layout):

  • Photo: A real, relatable person (not a stock photo or anyone known to the team).
  • Name: Memorable, fits demographics.
  • Demographics: Realistic, representative (age, gender, ethnicity, education, job, marital status, location) derived from interviews or online profiles.
  • Quote: Actual quote embodying a core belief/attitude relevant to needs.
  • Goals: 3-4 key goals the product/website will serve.
  • Behaviors and habits: Specific, habitual actions defining the persona’s pattern.
  • Skills: Level of technical expertise and experience.
  • Environment: Physical context, hardware, software, internet access.
  • Relationships: Influential people (partner, children, coworkers) affecting product interaction.
  • Scenarios: Stories of how a persona interacts with the system to meet goals. They flesh out requirements, explore/validate solutions, and serve as usability test scripts. Scenarios are user-centric, not system-centric.
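The place-mat fields above map naturally onto a simple record type. A sketch of one way to hold a persona in code (field names and the example persona are mine, not the book's):

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str                  # memorable, fits demographics
    photo: str                 # a relatable real person, not a stock photo
    quote: str                 # actual user quote embodying a core attitude
    demographics: dict         # age, location, job, etc.
    goals: list                # 3-4 key goals the product will serve
    behaviors: list = field(default_factory=list)
    skills: str = ""           # level of technical expertise
    environment: str = ""      # physical context, hardware, software
    relationships: list = field(default_factory=list)
    scenarios: list = field(default_factory=list)  # user-centric stories

dana = Persona(
    name="Dana",
    photo="dana.jpg",
    quote="I like to start the weekend with some good activities in mind.",
    demographics={"age": 34, "location": "Portland", "job": "teacher"},
    goals=["Find weekend activities", "Buy tickets quickly"],
)
```

A structured record like this also makes it easy to reuse a persona's goals as the basis for usability-test tasks and scenarios.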

Personas help the team “stay on target” by becoming the first people to consider for new ideas, ensuring design addresses user concerns, not just personal preferences or boss’s demands.

Mental Models are an individual’s pre-existing internal concept of how something functions and is organized. “Intuitive” design matches the user’s mental model, making it easier to learn and use. Designers can use data to diagram the composite mental model of user types.

  • How to create a mental model: Do user research, make an affinity diagram, place affinity clusters into stacks representing user’s cognitive space (actions, beliefs, feelings), and group stacks around tasks/goals.
  • Building on the towers:
    • Conceptual modeling/site mapping: Translate mental model into a conceptual map for content and functionality relationships.
    • Gap analysis: Identify mismatches between what you offer and what users need/expect, revealing opportunities for new features or showing where planned features don’t fit.
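Once the mental model and the offering are both written down as lists, gap analysis reduces to a pair of set differences. A toy example (the items are illustrative):

```python
# What users need/expect (from the mental model) vs. what the product offers.
user_needs = {"buy tickets", "event reminders", "lesson plans", "directions"}
offered    = {"buy tickets", "directions", "press releases"}

unmet_needs   = user_needs - offered  # opportunities for new features
unfit_offered = offered - user_needs  # features with no place in the model
```

The first set points to opportunities; the second flags planned or existing features that may not fit how users actually think about the domain.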

Task Analysis/Workflow is breaking a particular task into discrete steps. Best preceded by contextual inquiry or detailed user interviews. It maps real-world actions to online functionality.

  • Break it down: Identify each step a user takes, initial state, prompting event, needed information/tools, and interruption points.
  • Make it flow: Reassemble steps into a workflow to inform feature set and content support. It also helps identify unexpected user paths or environmental influences.

Model Management acknowledges that these models are just a sample of ways to work with research data. Communicating the meaning and value of research is a design activity in itself. Collaborative synthesis ensures shared understanding, and clear, economical diagrams are viscerally appealing and can promote research value among skeptics.

Quantitative Research

This chapter delves into quantitative research as the primary method for optimizing designs once they are live and generating data. Hall immediately challenges the notion of “optimal” as inherently subjective, stressing that designers must always make trade-offs and understand what they are optimizing for.

Hall acknowledges that qualitative research provides deep insights into human behavior and decision-making, leading to sensible and elegant systems. However, once a product is launched and users arrive in significant numbers, quantitative data provides objective feedback on performance. All user interactions can be measured, turning individual quirks into patterns among “faceless masses.”

The concept of conversion is central: a user takes a measurable action defined as a goal (e.g., “sign up,” “buy now,” “make a reservation”). Many websites are optimized for simple conversion, but Hall notes that most have several types, requiring a business decision about which are most important.

Ease into analytics: Hall encourages designers to embrace analytics, which involves the collection and analysis of data on actual website/application usage. This direct feedback can be addictive and useful in arguments with decision-makers who love data. Google Analytics is a free, excellent starting point. Basic stats include:

  • Total number of visits.
  • Total number of pageviews.
  • Average number of pages per visit.
  • Bounce rate: Percentage of users who leave after viewing one page (lower is generally better, but too low might mean poor discovery).
  • Average time on site.
  • Percentage of new visitors.

While some metrics (like total visits) aim to go up, others (like bounce rate) depend on interpretation relative to audience and business goals. Analytics can reveal where traffic comes from and allow “In-Page Analytics” to see user clicks and scrolls. Hall suggests defining quantitative goals based on industry averages and prioritizing changes based on data. A high bounce rate, for instance, often indicates unmet expectations.

Lickety Split (A/B Testing): When there’s a debate over the most effective change for a quantifiable goal (e.g., increasing newsletter sign-ups), split testing (also known as A/B testing, multivariate testing, etc.) provides a “clinical trial.” Some visitors see the current design (control), others see a variation. The winner, performing significantly better for a specific metric, is adopted. This seems to offer “mathematical perfection.”

The split testing process involves:

  • Selecting a specific, quantifiable goal.
  • Creating variations (e.g., button wording, size, color, placement; copy; price; image type).
  • Choosing an appropriate start date.
  • Running the experiment until a 95% confidence level is reached, which may take days to weeks depending on traffic. This helps rule out chance occurrences or outliers (like a New York Times mention).
  • Reviewing the data.
  • Deciding what to do next.
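The 95% confidence threshold in the process above can be checked with a standard two-proportion z-test. A sketch using only the standard library (the visitor and conversion figures are invented):

```python
import math

def ab_significant(n_a, conv_a, n_b, conv_b, alpha=0.05):
    """Two-tailed two-proportion z-test: is the difference in conversion
    rates between variants A and B statistically significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-tailed p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < alpha

# Control: 1,000 visitors, 100 sign-ups; variation: 1,000 visitors, 150 sign-ups
ab_significant(1000, 100, 1000, 150)
```

A 10% vs. 15% conversion split on 1,000 visitors each clears the 95% bar comfortably; a 10% vs. 10.5% split on the same traffic does not, which is why low-traffic tests must run for days or weeks.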

Hall notes that the winner is often counterintuitive, providing an opportunity to learn. Patterns from multiple tests can inform design work for conversion goals.

Cautions and considerations for split testing:

  • It’s a seductive process that seems to promise automation and certitude, but human decision-making and interpretation are still necessary.
  • It affects the live site, so experiments must be designed carefully to avoid disrupting what’s working well. Inconsistency can erode trust.
  • It’s an incremental process (tweaking, knob-twiddling), not a source of high-level strategic guidance. Best for aspects where users expect variation (e.g., landing pages) and where a single clear user behavior is desired. Not ideal for global navigation.
  • Focusing solely on small positive changes can lead to a culture of incrementalism and risk aversion. Andrew Chen’s concept of the local maximum illustrates that over-optimizing an existing system can prevent “great leaps” to vastly greater heights (Fig 9.1 in the book).

Hall concludes by stating that designers and data junkies can be friends. While data rules the roost in many companies, designers can feel frustrated. The best teams are “Spock-like,” embracing data while encouraging ambitious thinking and looking beyond what’s measurable to what’s valued. Optimizing everything can still lead to failure if you’re optimizing for the wrong things. By asking “Why?” before “How?”, qualitative approaches provide the context for truly valuable innovation beyond the current “best.” Even math has its limits.

Conclusion

Erika Hall concludes “Just Enough Research” by stating that if the book has raised more questions than it answered, that’s fantastic. She wants readers to be excited about asking questions, as questions are “more powerful than answers” and require more courage than clinging to comfortable assumptions.

Hall emphasizes that joyful, needs-meeting products are the result of someone asking hard questions: “Why should this exist? Who benefits? How can we make this better?” Designers deserve to put their effort and craft into work that has real meaning. This means always inquiring into the real-world context surrounding your work. When “blue-sky thinking meets reality, reality always wins.”

Hall advocates for cultivating a desire to be proven wrong quickly and at the lowest cost. In cultures that prize “failing fast,” the fastest way to fail is by testing an idea still on the drawing board, or even better, by checking your assumptions before you even start drawing.

The right questions will:

  • Keep you honest.
  • Improve team communication.
  • Prevent wasted time and money.
  • Be your competitive advantage, guiding you to workable solutions for real problems.

Hall summarizes the process as: Form questions. Gather data. Analyze. This single sequence offers many approaches. She hopes the techniques outlined in the book help readers get started immediately and develop a research habit in any work context. Research is not a burden or luxury, but a means to develop useful insights within your existing process.

The final answer to “How much research is just enough?” is simple: “You’ll need to do just enough to find out.”

Key Takeaways

  • Research is a decision-making tool, not a luxury: It prevents wasted time and resources by helping identify and solve the right problems for real people.
  • Context is everything: Understanding the user’s physical environment, mental models, habits, and relationships is crucial for designing relevant solutions.
  • Don’t ask what users “want”: People are poor reporters of their own desires and behaviors. Instead, observe them, and ask open-ended questions that reveal their underlying motivations and challenges.
  • Organizational understanding is vital: Beyond external users, understand your client’s or company’s internal politics, workflows, priorities, and resources to ensure project success and gain stakeholder buy-in.
  • Embrace continuous, iterative research: From generative exploration to evaluative testing and quantitative analysis, research should inform every stage of design and development.
  • Bias is always present: Be aware of design, sampling, interviewer, sponsor, social desirability, and Hawthorne effects, and strive for rigor and ethical conduct.
  • Qualitative and quantitative data complement each other: Qualitative methods (interviews, ethnography) tell you why things happen, while quantitative methods (analytics, A/B testing) tell you what is happening and how much.

Next Actions

  • Identify your highest-risk assumptions: What bad thing will happen if you’re wrong about a core assumption? This helps prioritize your research efforts.
  • Start small: Pick one key question and try a low-cost research method like quick stakeholder interviews or a simple heuristic analysis.
  • Involve your team: Ensure everyone on the design/development team participates in data collection and analysis to foster shared understanding and empathy.
  • Document findings clearly: Create concise, accessible reports or visual models (like affinity diagrams or personas) that can easily inform design decisions.

Reflection Prompts

  • What assumptions am I currently making about my users or my organization that, if wrong, could severely impact my project?
  • How can I integrate “just enough research” into my existing workflow or team’s processes without causing significant disruption?
  • What is one specific, quantifiable goal for my current project that could benefit from a split test or analytics review, and what qualitative insights might explain the results?