UX For AI: Complete Summary of Greg Nudelman’s Framework for Designing AI-Driven Products

Introduction: What This Book Is About

In “UX for AI: A Framework for Designing AI-Driven Products,” Greg Nudelman, with contributing editor Daria Kempka, presents a practical guide for UX professionals navigating the rapidly evolving landscape of artificial intelligence. Nudelman, a veteran of 35 AI projects and a Distinguished Designer/UX Architect at Sumo Logic, offers a comprehensive framework for designing AI-driven products that deliver tangible value and avoid common pitfalls. The book emphasizes that AI design is not merely a technical challenge but fundamentally a UX design problem, focusing on the critical intersection of human-AI interaction.

This book is essential for designers, product managers, data scientists, and engineers who seek to build successful AI products. It provides actionable techniques and principles for framing problems, designing intelligent interfaces, conducting AI-inclusive research, and addressing the ethical implications of AI. Readers will learn to leverage core UX skills—such as user research, prototyping, and stakeholder alignment—to lead AI projects effectively, ensuring that technology serves human needs.

By offering a blend of case studies, design exercises, and expert perspectives, Nudelman empowers UX professionals to become “ambassadors of innovation” in the AI era. This summary will provide comprehensive coverage of all key insights, methodologies, and practical advice presented in the book, equipping readers with the knowledge to drive impactful AI product development.

How to Use This Book: Maximizing Your Learning and Impact

Nudelman designed “UX for AI” to be both a comprehensive guide and a quick-reference tool, optimizing it for deep understanding and immediate application. The material is structured to allow readers to gain core skills efficiently, whether they have a few hours or can dedicate time to full engagement.

Strategic Reading for Maximum Value

The book’s content is optimized for understanding, with key concepts supported by hands-on design exercises. While reading the entire book in order is recommended for a holistic understanding, readers with limited time can prioritize specific chapters to gain foundational knowledge quickly. For those with only a couple of hours, focusing on Chapters 3, 4, 5, 15, 17, 19, 20, and 21 will provide the essential skills needed to immediately add value to most AI projects and effectively communicate with data scientists and AI engineers.

Importance of Hands-On Exercises

Doing the exercises is an essential part of the book, designed to help readers translate theoretical understanding into working knowledge. Nudelman strongly recommends applying the ideas directly to a personal AI-driven design project. If a reader does not have an active project, the book provides the “Life Clock” use case as an example for practicing various techniques and concepts. These exercises are crucial for coalescing understanding and sharpening insights.

Leveraging Low-Fidelity Tools

Nudelman encourages readers to draw in pencil on sticky notes for all design exercises and prototypes. This low-fidelity approach emphasizes that drawings are working prototypes, not art pieces, and are subject to immediate and frequent change. This method supports flexibility and prevents designers from becoming overly attached to early designs, which is critical in the rapidly evolving world of functional AI where constant iteration is key.




PART 1: Framing the Problem – Laying the Foundation for AI Success

Part 1 of “UX for AI” underscores the critical importance of correctly understanding and framing the problem that AI is intended to solve. Many products fail not due to technological shortcomings but because teams address the wrong problem. AI is a powerful tool, and the temptation to apply it broadly can lead to misdirected efforts. This section introduces foundational concepts—such as storyboarding, digital twin modeling, and the Value Matrix—to help teams select the right use case and define it effectively.

Chapter 1: Case Study: How to Completely F*ck Up Your AI Project – Learning from Failure

Nudelman opens the book with a stark case study of a failed AI project involving an acid gas removal unit (AGRU), likened to boiling spaghetti. This project, which involved a team of seven data science, development, and UX professionals, failed entirely due to several common pitfalls; its failure provides a powerful framework for learning.

The Boiling Pot of Spaghetti: A Multitude of Failure Points

The AGRU project aimed to use AI to predict when the “industrial pasta pot” would boil over, with the goal of replacing human technicians. This project epitomized several critical failure principles:

  • Fail #1: Trying to Replace a Trained Expert with AI: The project sought to replace experienced, though not highly paid, expert technicians with an AI solution. Nudelman states that presuming AI will tell experts how to do their job is a red flag, and if an AI solution costs more than the installed expert, teams should “run.”
  • Fail #2: Forgetting About Cost vs. Benefit: The team failed to quantify the cost/benefit analysis. The cost of a single overboiling event was significantly higher than a technician’s yearly salary, requiring the AI to be ridiculously accurate to justify its price. If the potential cost of a wrong AI guess far exceeds the benefit of a correct AI guess, walk away.
  • Fail #3: No ML Training Data? No Problem! (Actually, a Big Problem): Each “pot” installation was bespoke, making generalized machine learning (ML) data collection impossible. This meant each pot required a custom AI system, which was economically unfeasible. If you do not have the data to train your AI/ML or have no easy, cheap way to obtain the data, walk away; if your solution requires a custom AI model for each installation, run.
  • Fail #4: It Makes No Difference What Question Your AI Model Is Answering (It Absolutely Does): The AI model was built to answer a question (“how long until the next boil-over?”) that was convenient for the model but not directly related to the human operator’s goal of increasing profit by maximizing temperature. Nudelman likens this to looking for keys under a streetlight because that’s where the light is, not where they were lost. If your AI model is trying to answer a data science question instead of one directly related to maximizing profits, walk away; if your team insists on looking under a streetlight, run.
  • Fail #5: Don’t Worry About User Research—You Have an SME! (A Fatal Flaw): The team relied solely on a subject-matter expert (SME) who omitted crucial information (a visual “boiling surface window”) because it couldn’t be easily instrumented with sensors. This meant the AI had no chance to solve the problem. If you do not have a well-run research program that connects directly with customers, walk away; if you cannot conduct even a single in-person, on-site interview, run.

Final Thoughts on AI Project Failure

The failure stemmed from a combination of attempting to replace experts, ignoring cost/benefit, lacking training data, misaligned problem framing, and neglecting user research. Nudelman emphasizes that hindsight is 20/20 and learning from these mistakes is crucial for future success. He advises teams to keep these five principles top of mind to avoid common pitfalls in AI-driven projects.

Chapter 2: The Importance of Picking the Right Use Case – The Foundation of Success

Building on the previous chapter’s lessons, Nudelman highlights that picking the wrong use case is often the first harbinger of doom for any AI project. This chapter provides a detailed guide for selecting the most impactful and viable use cases for AI-driven initiatives.

Presuming AI Will Be Telling Experts How to Do Their Job Is a Red Flag

Nudelman shares a case study from a precision irrigation company that failed to sell its AI solution to third-generation farmers. The AI aimed to tell farmers when and how much to water their crops, a task the farmers already mastered through traditional “kick tests.” The idea that a third-generation farmer would rely on AI to tell them that their crops were sufficiently watered was downright insulting. Direct competition between a human expert and a machine is a recipe for failure, as it immediately triggers human suspicion, pride, and prejudice. Nudelman’s simple advice is to look for another use case when this dynamic is present.

Ask a Better Question to Uncover True Needs

Instead of forcing a solution, Nudelman asked the farmers what truly kept them up at night. He discovered their real concerns were dwindling fresh water supplies, stringent government regulations, and climate change turning their land into desert. This shifted the use case from “ensuring crops are watered” to “making recommendations about which parts of the field need less water to save water without compromising yield.” This subtle but critical difference transformed the AI’s role from a replacement to a valuable assistant. Good UXers make money by peddling their ignorance—they ask good questions. Existing UX methods like contextual inquiry and user interviews are crucial for systematically determining customer needs and identifying profitable AI use cases.

Promising AI Use Cases in Global Healthcare

Thomas Wilson highlights 10 powerful AI-driven innovations in healthcare, including generative AI in drug discovery, personalized medicine, AutoML for health platforms, explainable AI for diagnostics, NLP in telemedicine, and AI for employee enablement. These use cases offer immense ROI and patient benefits but also carry potential for harm and misuse, emphasizing the need for UX professionals to conduct formative research with patients, doctors, and nurses to design safeguards and mechanisms for human intervention.

Selecting the Right AI/ML Use Case

David Andrzejewski emphasizes that users want outcomes, not just AI. He suggests “the best AI is no AI” if a simpler, deterministic solution exists. If AI/ML is necessary, “boring AI” (simpler methods like decision trees) should be considered as baselines. He advises mapping problems to well-understood AI/ML problem settings and using best practices for model performance. For UX, a resilient core functionality should tolerate poor AI results, and designers should help the user help the AI (e.g., through faceted search). He stresses wrapping AI/ML in business logic for guardrails, and implementing calibration and explainability to enhance user trust and understanding. Finally, he notes that human psychology (like algorithm aversion) must be considered, suggesting allowing users to modify AI output or demonstrating AI improvement to mitigate negative perceptions.

Chapter 3: Storyboarding for AI Projects – Visualizing the AI Journey

Nudelman introduces storyboarding as an indispensable tool for AI projects, emphasizing its ability to effectively tell a story, ensure viability, and communicate the nascent project’s UX vision to users and stakeholders. Storyboards, from ancient Egypt to modern comics, demonstrate that “Words + Picture = Greater Impact.”

Why Bother with a Storyboard? Uncovering Gaps and Inconsistencies

Storyboards are crucial for analyzing AI-driven use cases because they make any gaps, inconsistencies, or nonsense stand out much more vividly than a simple written statement. Nudelman illustrates this with a “Mental Health Assistant” app example. An initial use case (“AI Therapist in Your Pocket”) appears viable in text but reveals its impracticality and potential risks when visualized. A revised use case (“AI Helping Hand for Mild Social Anxiety”) immediately appears more realistic and customer-resonant. Many AI projects fail to frame the problem correctly, leading to a lack of demand; storyboards can help solve this before significant investment.

How to Create a Storyboard: Components and Techniques

Creating a storyboard for AI projects should be simple yet sophisticated. It involves six components:

  1. Establishing Shot: The opening panel that sets the scene and immerses the protagonist and reader in the environment. Spending time on this first panel helps in visualizing the entire story.
  2. Things: Inanimate objects that are easy to draw (e.g., boxes and circles can form computers or phones).
  3. People: Stick-figure people are highly recommended for their ease, speed, and ability to allow readers to mentally place themselves in the action.
  4. Faces: Adding eyebrows to simple faces significantly helps communicate nuanced feelings.
  5. Transitions:
    • Action-to-Action: Shows the same subject in a series of actions, typically the default.
    • Subject-to-Subject: A series of changing subjects within a scene.
    • Scene-to-Scene: Transitions across significant distances in time and/or space, the only legitimate use of captions in UX storyboards.
    • Subject-to-AI: A special transition for AI projects where AI acts as another “subject” in the story, allowing visualization of human-AI interaction.
  6. Conclusion: The most important panel, where the projected benefit of the AI solution is revealed. The “payoff” must be realistic and fit the story.

Storyboarding for AI: Focus on “What” and “Why,” Omit Interface Detail

The most significant change in drawing AI storyboards is the increased focus on the “what” and “why” of the story, with intentional omission of much interface detail. This approach maintains team creativity and avoids being too prescriptive early on. Nudelman exemplifies this with an “Answer Phone While Driving” scenario. A “current UX” storyboard shows the danger of interacting with a watch while driving. An “AI-first UX” storyboard reimagines this with natural language interaction, where AI handles the task without visual distraction. The revised story is brief, leaves some details to the imagination, and provides a tangible solution. Use abstract representations whenever possible, but ensure the story hangs together well.

Final Thoughts on Storyboarding

Anyone can tell a great story about an AI-driven product, even if they are not a great artist. The key is to balance abstractions and realism, showing minimal AI-driven interface. Drawing is an exercise by humans and for humans, crucial for freeing imagination and making sense of problems. It connects designers to the universe and should never be discarded. Nudelman encourages having fun with storyboards, as it allows designers to tap into their inner child while adding value.

Design Exercise: Create Your Own Storyboard

Readers are tasked with creating a 4–6 panel storyboard for their own mobile or wearable AI app, using the “Life Clock” concept (predicting lifespan and offering health advice) as an example. The exercise emphasizes including Subject-to-AI transitions and concluding with a “Natural Bang” that conveys the app’s benefit.

Chapter 4: Digital Twin—Digital Representation of the Physical Components of Your System – Modeling for Understanding

Nudelman introduces the digital twin as an excellent model for the UX design of AI-driven products, highlighting its incredible potential at the intersection of UX and AI. A digital twin is a model of the real world that enables the analysis of metrics and outcomes crucial to a physical system.

Digital Twin of a Wind Turbine Motor: A Practical Example

Nudelman uses the example of a GE Haliade 150 offshore wind turbine’s yaw motor to illustrate a simple digital twin. This motor’s digital twin tracks input current and temperature to predict remaining motor life. This simple model is sufficient because the yaw motor is cheap, reliable, and has triple redundancy. The GE Wind Turbine Management Software (GE-WTMS) leverages this digital twin to present four detailed screens: Parts View, input current, temperature, and remaining lifetime. The value of creating a digital twin lies in figuring out what is essential and not essential to include in the model and defining the use cases the model will deliver.

The Digital Twin Is an Essential Modeling Exercise for Designing AI-Driven Products

A digital twin serves as a crucial exercise in understanding and modeling, akin to creating a persona for AI systems. It helps identify “knobs to rotate and buttons to push” for operational control. Nudelman emphasizes the “four blind men and an elephant” analogy, stating that digital twin creation is best undertaken as a cross-functional team (Product Manager, UX Designer, Developer Lead, and Data Scientist) to discover all relevant aspects and uncover important AI use cases. This collaborative discussion is where the actual value of a digital twin model is delivered.

How to Build a Digital Twin: An Example of Iteration

The core process for building a digital twin involves:

  1. Understanding information collected by sensors.
  2. Visually representing relevant physical world aspects.
  3. Labeling the picture with incoming data.
  4. Identifying use cases and valuable predicted measurements.
  5. Noting missing data and discussing how to obtain it (breaking data silos).
  6. Watching out for “creepy” data conclusions (e.g., insurance rates) and ethical implications.
  7. Iterating continuously.
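The steps above can be sketched in code. The following is a minimal, illustrative digital twin of the yaw motor described earlier: sensor inputs (hours run, current, temperature) on the left, a predicted output (remaining life) on the right. The wear formula and all constants are made-up placeholders for illustration, not GE's actual model.

```python
# Minimal digital-twin sketch: sensor inputs in, predicted measurement out.
# The wear model and all constants below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class YawMotorTwin:
    rated_hours: float = 50_000.0  # assumed design life of the motor

    def remaining_life_hours(self, hours_run: float,
                             avg_current_amps: float,
                             avg_temp_c: float) -> float:
        # Placeholder wear model: heat and electrical load accelerate aging.
        stress = (1.0
                  + 0.01 * max(avg_temp_c - 40.0, 0.0)
                  + 0.005 * max(avg_current_amps - 10.0, 0.0))
        return max(self.rated_hours - hours_run * stress, 0.0)


twin = YawMotorTwin()
print(twin.remaining_life_hours(hours_run=10_000,
                                avg_current_amps=12.0,
                                avg_temp_c=55.0))
```

The value of the exercise is less in the formula itself than in deciding which inputs, outputs, and "control knobs" belong in the model at all.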

Nudelman demonstrates this with a smartwatch exercise tracker example, showing how the digital twin evolves from a basic model (pulse, age to predict fitness) to increasingly complex iterations incorporating smartphone GPS data (terrain, elevation, weight to calculate calories burned, comparative fitness), and finally, sleep tracking. He stresses that whether to collect and model certain data depends on what is being predicted and what users will allow, highlighting the importance of legal and ethical considerations.

Final Thoughts on Digital Twins

Digital twin modeling facilitates high-quality discussions about measurements, predictions, controls, and use cases. It should be drawn with inputs on the left (time series, JSON, pictures), outputs on the right (predicted variables), and control “knobs” on the system model in the middle. Teams must discuss data sources and ethical implications.

Design Exercise: Create Your Own Digital Twin

Readers are prompted to create a digital twin model for their own use case, following a simple set of steps: drawing the system model, labeling inputs and outputs, identifying self-reported or external data needs, and considering ethical implications and potential misuse. The “Life Clock” digital twin example is provided, illustrating how image analysis (food photos) combined with personal, cardio, weight, and exercise data can predict immediate and long-term health impacts, including lifespan.

Chapter 5: Value Matrix—AI Accuracy Is Bullshit. Here’s What UX Must Do About It – Optimizing AI for Real-World Outcomes

Nudelman reveals what he calls “the big secret”: AI accuracy, as measured by data science metrics, is meaningless in the real world. This chapter provides UX designers with a practical, business-centric alternative: the Value Matrix, which helps optimize AI solutions to “think” in terms of human values and real-world costs and benefits.

The Big Secret: Data Science Metrics Are Bullshit

Nudelman asserts that traditional data science metrics like accuracy, precision, and recall (often used in competitions like Kaggle) mean little for real-world AI applications. He introduces a fictional car manufacturer, “Pascal Motors,” which uses AI to predict car maintenance needs. Three AI models (Conservative, Balanced, Aggressive) show varying data science metrics. While most would intuitively choose the “Conservative AI” for its high accuracy, Nudelman demonstrates that optimizing for revenue by considering real-world cost and benefit makes the “Balanced AI” the best choice, yielding over 158% more revenue. The key takeaway: AI optimized on data science metrics alone will almost always underperform AI that considers the costs and benefits of real-world outcomes.

Confusion Matrix: Understanding AI’s Predictions

To explain why accurate AI can be wrong, Nudelman introduces the Confusion Matrix. This simple table compares model predictions against actual outcomes, categorizing them into four possibilities:

  • True Negative (TN): No problem, AI doesn’t alert.
  • False Negative (FN): Problem exists, AI doesn’t alert (a miss).
  • True Positive (TP): Problem exists, AI alerts correctly.
  • False Positive (FP): No problem, AI alerts incorrectly.

Accuracy is calculated as (True Positives + True Negatives) / Total Predictions. Nudelman shows how Pascal Motors’ “Conservative (Accurate) AI” with 88% accuracy is “less than useless” because it missed 11 out of 20 actual problems, failing to serve the business goal. AI trained on accuracy is often too timid, avoiding false positives even at the cost of missing true problems. Conversely, AI trained on recall (like the “Aggressive AI”) can be too aggressive, generating too many alerts.
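The accuracy and recall formulas can be made concrete with a short sketch. The chapter gives 88% accuracy and 11 of 20 real problems missed; the FP/TN split and the total of 100 predictions below are illustrative assumptions chosen to reproduce those figures.

```python
# Confusion-matrix metrics for a maintenance-alert model.
# Counts are illustrative assumptions consistent with the chapter's
# "88% accuracy, 11 of 20 real problems missed" example.
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Accuracy = (TP + TN) / total predictions."""
    return (tp + tn) / (tp + tn + fp + fn)


def recall(tp: int, fn: int) -> float:
    """Recall = share of real problems the model actually caught."""
    return tp / (tp + fn)


# A "timid" model: almost never alerts, so true negatives dominate.
tp, tn, fp, fn = 9, 79, 1, 11  # 20 real problems, 11 of them missed
print(f"accuracy: {accuracy(tp, tn, fp, fn):.2f}")  # high (0.88)...
print(f"recall:   {recall(tp, fn):.2f}")  # ...yet under half the problems caught
```

The high accuracy comes almost entirely from the easy true negatives, which is exactly why the metric flatters a timid model.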

Value Matrix: The AI Tool for the Real World

The Value Matrix, developed by Arijit Sengupta, is a crucial tweak to the Confusion Matrix. It involves recording the monetary value (cost or benefit) of each of the four outcomes (TP, TN, FP, FN) and multiplying it by the outcome count to determine the overall AI model ROI. For example, a successful preventive repair (TP) might save $1,000, while an unnecessary investigation (FP) might cost $100. This tool allows teams to evaluate the real-world outcomes of deploying different AI models and reveals that different value assumptions can lead to vastly different optimal AI choices.
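The Value Matrix computation is a simple sum of count times value per outcome. The $1,000 preventive-repair benefit and $100 investigation cost come from the chapter; the FN cost and the two models' outcome counts below are illustrative assumptions.

```python
# Value Matrix sketch: multiply each outcome count by its real-world
# dollar value and sum to get the model's ROI. TP/FP values are from
# the chapter; the FN cost and the outcome counts are assumptions.
def value_matrix_roi(counts: dict, values: dict) -> int:
    return sum(counts[k] * values[k] for k in ("TP", "TN", "FP", "FN"))


values = {"TP": 1_000, "TN": 0, "FP": -100, "FN": -1_000}  # FN cost assumed

conservative = {"TP": 9, "TN": 79, "FP": 1, "FN": 11}   # accurate but timid
balanced     = {"TP": 16, "TN": 70, "FP": 10, "FN": 4}  # more alerts, more catches

for name, counts in [("conservative", conservative), ("balanced", balanced)]:
    print(name, value_matrix_roi(counts, values))
```

Under these assumptions the less "accurate" balanced model produces a far higher ROI, which is the chapter's central point: optimize on outcomes, not on data science metrics.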

Training AI on Real-Life Outcomes to “Think” Like a Human

Nudelman emphasizes that a different value assumption produces a very different Value Matrix. If the cost of a false positive is high (e.g., $800), a Conservative AI (high accuracy) might be best. If the value of a true positive is very high (e.g., $10,000), an Aggressive AI (high recall) might be optimal. He illustrates that AI models trained on opposite goals can even produce negative ROI if the cost/benefit values are misaligned. The core idea is that instead of AI asking “Which event is most likely to be a problem?”, it should ask a business question: “How do I maximize revenue?” This requires UX research and analysis to help AI think in “human” terms.

TSA AI Example: The Importance of Human Cost/Benefit

Nudelman provides a compelling example of TSA AI predicting terrorists. A model with 99.9999999999999999% accuracy (due to the rarity of terrorists) would be useless. However, if the AI considered the $1 trillion impact of an attack versus the $1 cost of a secondary inspection, it would prompt virtually endless inspections. This leads to the crucial point: TSA doesn’t conduct secondary inspections for every traveler because the human costs would be too high (time, congestion, industry impact). AI is too important to leave to data scientists alone; every real-world AI solution must be tempered by a deep understanding of both business and human impact. UX professionals are essential for this.
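The TSA example rests on a base-rate effect worth seeing numerically: when positives are vanishingly rare, a model that never alerts is almost perfectly "accurate" while catching nothing. The prevalence figure below is an assumption for illustration.

```python
# Why raw accuracy is meaningless for rare events: a classifier that
# ALWAYS predicts "no threat" is right on every non-threat, so its
# accuracy equals 1 minus the prevalence. Prevalence is an assumption.
def never_alert_accuracy(prevalence: float) -> float:
    """Accuracy of a classifier that always predicts the negative class."""
    return 1.0 - prevalence


p = 1e-9  # assumed: one true positive per billion travelers
print(f"{never_alert_accuracy(p):.10f}")  # near-perfect accuracy, zero recall
```

This is why the real question has to be framed in costs and benefits, not accuracy.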

Design Exercise: Create Your Own Value Matrix

Readers are tasked with creating a Value Matrix for their own use case by defining the benefit/cost of true positives, true negatives, false positives, and false negatives. They must also determine which AI model type (Conservative/Accurate or Aggressive/High Recall) is needed and reflect on the human costs embedded in business ROI, potential inconvenience, impact on customer loyalty, and ethical implications of AI decisions. The “Life Clock” Value Matrix example analyzes the cost/benefit of food image recognition, highlighting that initial accuracy may be low (33%) but rapid improvement for frequently entered foods is critical for long-term user trust.

PART 2: AI Design Patterns – Crafting Effective AI Interfaces

Part 2 of “UX for AI” shifts focus to modern UI approaches and emerging design patterns for creating effective interfaces for AI-driven products. It begins with a deep dive into successful case studies, then explores best practices for Copilot design, reporting, advanced LLM patterns, and the revolution of AI in search UIs. The section concludes with discussions on dynamic dashboards, anomaly detection, forecasting, and the cutting-edge concept of AI agents, providing a comprehensive toolkit for AI interface design.

Chapter 6: Case Study: What Made Sumo Copilot Successful? – A Blueprint for AI Product Success

Nudelman shares a detailed case study of Sumo Logic Copilot, a successful UX for AI project that defied the 85% failure rate of AI initiatives. He attributes its success not only to exceptional development and AI teams but also to several key UX aspects.

Key Success Factors of Sumo Copilot

Sumo Logic Copilot’s success was built upon five critical elements:

  1. Strong Use Case: The Copilot addressed a clear and immediate user need: simplifying Sumo Query Language (SQL) for powerful log searches. It was trained on over 2,000 custom queries to provide contextualized results and visualizations, making complex queries accessible to nontechnical users and junior analysts.
  2. Clear Vision: The project was driven by Sumo Logic’s CPO, Tej Redkar, whose “one thing” vision was to “never let the user leave Sumo empty-handed.” This clear objective guided design decisions, ensuring valuable insights were always easy to obtain.
  3. Dedicated Full-Screen UI: Unlike many Copilots integrated as side panels, Sumo Copilot was developed as a dedicated, custom full-screen experience. This provided ample screen real estate for complex data visualizations and allowed for powerful features like autocomplete, next-steps suggestions, and a restatement feature (echoing back interpretations and showing SQL queries), fostering user trust and learning.
  4. AI-Driven Autocomplete: A key differentiator, the autocomplete feature is powered by a powerful AI engine. It recommends initial starting points, provides suggestions, and even suggests source expressions. Diagonal arrows next to suggestions allow users to populate the query for editing, saving time and computational cost.
  5. Next-Steps Suggestions: These are highly customized natural language processing (NLP) queries driven by the user’s journey through the system. They are not pre-canned but respond to and continuously learn from user interactions, leveraging industry knowledge to present insightful exploration ideas.

Final Words on Sumo Copilot’s Success

Sumo Logic Copilot demonstrates that successful AI-driven projects require a convergence of technical excellence and effective UX practices. The strong use case, clear vision, dedicated UI, AI-driven autocomplete, and intelligent next-steps suggestions collectively contributed to its positive customer reception and value delivery.

Chapter 7: UX Best Practices for SaaS Copilot Design – Crafting Effective AI Assistants

This chapter delves into essential UX design best practices for creating functional and helpful Copilots in SaaS (Software as a Service) products, using Microsoft Security Copilot (MSC) as a primary example.

Real Estate Allocation: Matching UI to Task Importance

The amount of screen real estate a Copilot needs depends on the scope and importance of the task. Nudelman categorizes Copilots into three styles:

  • Side Panel: Integrated as an add-on button within a specific page, ideal for localized, page-level information. Nudelman recommends moving the parent page content over to avoid obscuring crucial information.
  • Large Overlay: A large panel used for more general questions or in-depth analysis. This is almost always the least desirable option as it obscures much of the parent page, making interaction awkward and leading to frequent reloads.
  • Full Page: A dedicated, custom UI (like Sumo Copilot or MSC’s full-page version) that can handle any level of task and deep analysis. It is the most flexible yet heavyweight option, requiring dedicated product navigation.

Core Features of a Successful SaaS Copilot

  • SaaS Copilot Is Stateful: Unlike ephemeral chatbots like Microsoft Bing Copilot, a SaaS Copilot (like MSC) maintains long-term memory, allowing users to have multiple overlapping conversations and pick up where they left off. This context retention is crucial for deeper, multistage analyses.
  • Specialized Fine-Tuned ChatGPT Model: MSC demonstrates the critical importance of training AI models on up-to-date, custom data. Fine-tuning, retrieval-augmented generation (RAG), and other methods significantly improve performance compared to stock LLMs, allowing access to specialized databases and real-time information.
  • Plug-Ins: Integrated Continuous Learning: MSC’s “plug-ins” are custom data feeds that supply real-time information from multiple systems directly to the LLM. This is a “complete game changer” as it overcomes the static data limitations of stock LLMs, enabling the Copilot to answer questions about very recent incidents.
  • The IA of the AI Is Straightforward, Focused on Chat: The information architecture (IA) for a typical Copilot is relatively simple: a landing page, a “My Sessions” (history) page, and chat-based investigation sessions. The core interaction is focused on the chat, making it the central terminal node.
  • Promptbooks: No Need to Twist into Pretzels to Write Prompts: MSC offers “Promptbooks”—premade recipes for common investigations—as a welcome alternative to complex prompt engineering. These provide specific, short queries in natural language that interact with custom data predictably, essential for stressful, time-sensitive security incident responses.
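The retrieval-augmented pattern mentioned above can be sketched in a few lines: pull the most relevant custom documents into the prompt so the model sees fresh, domain-specific context. The word-overlap scoring and the sample incident documents are simplified stand-ins for a real embedding-based retriever and live plug-in data feeds, not Microsoft's implementation.

```python
# Toy retrieval-augmented prompt builder. Word-overlap scoring stands in
# for a real embedding-based retriever; the documents are invented
# placeholders for live plug-in data feeds.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    # Rank documents by how many query words they share (descending).
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"


docs = [
    "Incident 4012: phishing email reported by finance team",
    "Incident 4013: unusual login from new device in sales",
    "Quarterly report template updated last week",
]
print(build_prompt("what phishing incident did finance report", docs))
```

However the retrieval is implemented, the UX consequence is the same: the Copilot can answer questions about very recent events that a stock LLM has never seen.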

Design Exercise: Create Your Own Mobile Copilot

Readers are tasked with designing a mobile Copilot UI for their use case, brainstorming standard chat features, considering additional data sources for real-time awareness, defining the Copilot’s information architecture (especially if it needs to be stateful), planning promptbooks, and outlining on-screen UI controls. The “Life Clock Copilot” example demonstrates how a mobile app can leverage camera input for food analysis, offer health coach commentary, and maintain state for daily consumption tracking.

Chapter 8: Reporting—One of the Most Important Copilot Use Cases – Leveraging AI for Insights

Nudelman highlights reporting as one of the most powerful yet often overlooked LLM use cases. This chapter continues the exploration of Copilot design by focusing on how AI can revolutionize report generation, using Zoom AI Companion (ZAC) and Microsoft Security Copilot (MSC) as prime examples.

Zoom AI Companion: Effortless Meeting Summaries

ZAC provides an automated meeting summary directly from the transcript, with minimal setup controls. This “set it and forget it” approach delivers immediate benefits. ZAC can also answer specific questions about the meeting, providing action items and due dates. A “really cool and creative feature” is its UI Modality switch, which can transform meeting ideas into a digital whiteboard or brainstorming canvas, feeling like “magic” as AI automatically picks the right UI modality while maintaining context.

Microsoft Security Copilot: Specialized Reports

MSC offers two distinct reporting functionalities:

  • Executive Summary: This provides effortless automated documentation of security incidents with a simple prompt. Its key strength is generating reports in plain, understandable, jargon-free English, suitable for diverse stakeholders like leadership, auditors, and regulators.
  • Pinboard: This excellent feature allows users to create custom reports from manually selected data points from their investigations. This is crucial for security teams to focus the report only on relevant insights, avoiding lengthy, tedious narratives or AI hallucinations from irrelevant information. The pinboard ensures anyone joining an investigation can quickly grasp critical information.

Trade-Offs in Report Information Selection

Nudelman notes a significant divergence between ZAC and MSC in how information for reports is chosen. ZAC automatically filters non-relevant information (e.g., a fishing story in a meeting summary), while MSC’s Pinboard relies heavily on human security analysts to manually pick data. This difference likely stems from the varying importance and legal requirements of their reports: security reports demand utmost accuracy to avoid false positives/negatives, which are costly and legally sensitive. Design decisions for AI products must be based on a solid understanding of the costs and benefits of various trade-offs.

Security and Privacy Concerns

Nudelman stresses the paramount importance of security and privacy for high-end paid AI Copilot services. He questions whether AI tools are trained on user data (Zoom and Microsoft assure they are not) and highlights the risk of confidential information leaks. He suggests that Copilots should transparently address privacy concerns directly in the UI, as general distrust of AI often leads users to question data visibility by IT admins or company leadership. A clear stance on security and privacy might be the most demanded (and most often overlooked) feature in Copilot designs.

Design Exercise: Create Your Own Copilot Report

Readers are tasked with designing a Copilot report, considering what text to include/omit, whether editing is manual or automatic, who the report users are (and associated privacy concerns), the need for multiple report types (daily, weekly), and whether to augment text with graphs. The “Life Clock Copilot Report” example demonstrates daily and weekly summaries, including a concise overview of food and exercise, snarky “life coach” commentary, and the use of navigation menus as overlays to save time in wireframing.

Chapter 9: LLM Design Patterns – Enhancing Human-AI Interaction

Nudelman emphasizes that the discussion of Copilots and reporting wouldn’t be complete without exploring the key LLM design patterns that make them so useful. These patterns are critical for any UX context involving large language models (LLMs) or small language models (SLMs), as modern LLMs demonstrate a profound ability to “understand” context from disparate data sources.

Core LLM Design Patterns for Effective Interaction

Nudelman identifies seven critical patterns to ensure LLMs perform as intended:

  1. Restating: The AI tells the user what it understood as input, eliminating confusion. Example: Microsoft Power BI’s NLP Ask feature correctly interprets “where is 2017” as “2017 (order date).” Nudelman advises considering the impact of false positives when deciding whether to take immediate action after restating (e.g., confirm before sending an SMS).
  2. Auto-Complete: Provides correct concepts and vocabulary before confusion arises, “preponing” the restatement. Power BI offers sophisticated auto-complete suggestions based on data content and structure, even labeling “changeable” and “unmatched” fields. This leads to a streamlined and satisfying user experience by preventing missed queries.
  3. Talk-Back: Similar to restating, but with broader capabilities. Talk-Back explains what went wrong, asks additional questions, and suggests different exploration strategies. Ethan Mollick’s example with Claude AI (“Remove the squid”) demonstrates its sophisticated and verbose reasoning, requiring a chat interface to shine.
  4. Initial Suggestions: Recommendations displayed before the user takes any steps. These can be generic (like ChatGPT’s) or tuned to specific data types or previous conversations (like Power BI or Claude 3.5 Sonnet). Combining multiple types of suggestions (e.g., from past chats and data sources) can make AI feel like it’s “reading the user’s mind.”
  5. Next Steps: Suggestions that appear after a query is executed, inferring the user’s likely next question. LLMs can perform deep analysis on query results, identifying trends and anomalies beyond simple auto-suggestions. Nudelman cites Sumo Logic Copilot’s multi-type suggestions as an example of sophisticated next steps driven by data source and current query. Continuous retraining of the suggestions engine is a must.
  6. Regen Tweaks: Used in creative generation flows (e.g., Midjourney), where the output is assumed to be incorrect and needs rapid regeneration with slight modifications. Unlike chat/exploration flows (where AI temperature is “cold”), creative flows use a “hot” AI model temperature for variable, creative output. Tools like Vary (Subtle) and Vary (Strong) control LLM temperature and variation.
  7. Guardrails: Content moderation mechanisms that prevent LLMs from generating harmful content (hate speech, illegal activities). Nudelman notes that while LLMs “resist” giving forbidden information (e.g., a Molotov cocktail recipe), clever query engineering can often bypass them by providing plausible reasons (e.g., academic interest). He warns that there is simply no way to guarantee that some data will remain truly “private” once part of an LLM dataset.
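
The Restating pattern above lends itself to a simple sketch: echo the parsed intent back to the user, and gate high-impact actions (like sending an SMS) behind explicit confirmation, since false positives are costly. All names here (`ParsedIntent`, `next_step`, etc.) are illustrative, not from the book or any real API.

```python
# Sketch of the "Restating" pattern: tell the user what the AI
# understood, and confirm before irreversible actions.
from dataclasses import dataclass

@dataclass
class ParsedIntent:
    action: str        # e.g. "send_sms"
    target: str        # e.g. "+1-555-0100"
    high_impact: bool  # irreversible actions need explicit confirmation

def restate(intent: ParsedIntent) -> str:
    """Eliminate confusion by echoing the interpreted input."""
    return f"I understood: {intent.action} -> {intent.target}."

def next_step(intent: ParsedIntent) -> str:
    """Low-impact actions run immediately; high-impact ones confirm
    first, because a false positive (texting the wrong person) is
    far costlier than an extra click."""
    if intent.high_impact:
        return restate(intent) + " Shall I proceed? (yes/no)"
    return restate(intent) + " Done."

print(next_step(ParsedIntent("send_sms", "+1-555-0100", high_impact=True)))
```

The same confirm-or-act fork generalizes to any pattern where the cost of a false positive differs from the cost of an extra confirmation step.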

Design Exercise: Try Out the LLM Patterns

Readers are instructed to augment their Copilot design by adding Restating, Auto-Complete, Talk-Back, Initial Suggestions, Next Steps, and Guardrails, considering whether their flow is conversational (Next Steps) or creative regeneration (Regen Tweaks). The “Life Copilot Plus” example demonstrates sophisticated Initial Suggestions based on time of day and history, auto-complete for food entries, Restating for confirmation, and Guardrails that gently refuse inappropriate requests while suggesting alternatives.


Chapter 10: Search UX Revolution: LLM AI in Search UIs – Transforming Information Discovery

Nudelman emphasizes that LLMs are irrevocably changing the UX design of search UIs, highlighting their power beyond Copilots and reporting. This chapter explores how LLMs are revolutionizing information discovery by tackling problems traditional search engines struggle with.

The Current State of Search: Google vs. Amazon

Nudelman outlines two primary traditional search approaches:

  • Google Search: Characterized by a large, friendly search box, using fuzzy logic to match synonyms and keywords. It sorts results by relevance and “authority” and includes “answers” from authoritative sources. Its primary application is to quickly find reliable content.
  • Amazon Search: The backbone of e-commerce, focused on finding items to buy. It is characterized by facets, which are convenient filters allowing users to narrow down queries (e.g., Department, Review Stars).

The “Mysteries That Are Not Scary” Problem: A Traditional Search Challenge

Nudelman uses Jared Spool’s “Mysteries That Are Not Scary” query to illustrate the limitations of conventional search. Traditional engines struggle with “negative” or poorly defined queries because they look for matches, not mismatches. Google often relies on human-made guides for such queries. Amazon Search performs even worse due to constrained content inventory (books/movies, not guides on “scariness”) and a lack of specific search facets. This results in a “hodgepodge” of irrelevant or even terrifying results (e.g., Stephen King’s It when searching for non-scary mysteries). Things that are easy for humans are historically difficult for computers.

Enter LLMs: A Revolution in Search

Nudelman highlights how LLMs are overcoming these challenges, citing his past work with Associated Press (AP) Images. While conventional AP search yields empty results for “Mysteries That Are Not Scary,” the AI-Powered Search delivers relevant images (e.g., Indian Tibetan Dance, Sherlock Holmes Museum). He declares this seemingly tiny improvement as “nothing short of a revolution in search.” LLMs like ChatGPT easily solve the riddle, even providing specific movie recommendations that perfectly fit the “non-scary” criteria. Nudelman predicts that soon, customers will demand well-formed, custom e-commerce and content results fine-tuned to specific content and accurate answers to fuzzy queries.
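
The shift from keyword matching to mismatch-capable search can be sketched in a few lines: instead of looking for the query's words in each item, ask a language model to score every catalog item against the fuzzy criterion and filter on the score. The scorer below is a hard-coded stub standing in for a real LLM call, and the titles and ratings are illustrative, not from the book.

```python
# Minimal sketch of LLM-assisted "negative" search: traditional engines
# look for matches, not mismatches; an LLM lets us filter on a fuzzy
# criterion like "mysteries that are NOT scary".
CATALOG = ["It", "Knives Out", "The Shining", "Only Murders in the Building"]

def scariness_stub(title: str) -> int:
    """Stand-in for an LLM rating each title 0 (cozy) .. 10 (terrifying)."""
    return {"It": 9, "Knives Out": 2, "The Shining": 10,
            "Only Murders in the Building": 1}[title]

def search_not_scary(catalog, max_scariness=3):
    # Keep only items the model rates below the scariness ceiling.
    return [t for t in catalog if scariness_stub(t) <= max_scariness]

print(search_not_scary(CATALOG))  # cozy mysteries only; no Stephen King
```

Note the emergent "Scariness Rating" here is exactly the kind of custom, query-specific metric Nudelman later shows in the AI-first search results.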

Design Exercise: Design Your Own LLM Search UI

Readers are prompted to design their own LLM search UI, considering how to handle “fuzzy” queries like “Mysteries That Are Not Scary.” They must consider if LLM AI will kick in automatically or via a switch, if its output will differ from regular search, and if facets will be useful. The “Life Copilot LLM Search” example demonstrates how the app can respond to a fuzzy query like “suggest a healthy cocktail recipe with lime,” providing a relevant list of non-alcoholic options, showcasing the impressive capability of LLM-based search.

Chapter 11: AI-Search Part 2: “Eye Meat” and DOI Sort Algorithms – Optimizing Content Presentation

Building on the previous chapter’s discussion of LLM search, Nudelman explores dynamic dashboards (“eye meat”) and DOI (degree of interest) sort algorithms—two critical applications of AI for sorting and displaying large quantities of content in a way that aligns with a particular customer’s interests.

What Are Dynamic Dashboards? Visualizing User Interest

Dynamic dashboards, referred to by Edward Tufte as “visual confections” and John Maeda as “eye meat,” are the platforms on which most digital experiences unfold. Nudelman notes that “figuring out what a particular customer wants to look at next is a tough problem.” He illustrates this with Amazon.com’s homepage, showing how its AI-driven recommendations often get items “wrong” (e.g., cat food for a dog person, excessive V for Vendetta posters). Similarly, Google’s dynamic dashboards for “Jungle Book” search show different results on mobile versus desktop, and present “movies about bears,” which make “a weird sort of machine sense.” These dashboards are often deliberately vague and designed to be playgrounds for AI.

Beware of Bias in AI Recommendations: A Critical Concern

Nudelman provides a stark warning about bias in AI recommendations, using Google’s search results for “presidential candidates” in 2016 and 2024. In 2016, Google stubbornly showed Bernie Sanders even after party nominations were secured. In 2024, Kamala Harris’s image was almost entirely absent from the first few pages of search results, only appearing on page 6 alongside irrelevant content like the American Library Association president. This demonstrates how AI bias can signal a candidate’s perceived irrelevance. Nudelman stresses the urgent and critical need to be keenly aware of AI bias when using AI to construct visual dashboards or sort search results.

DOI: Degree of Interest/Sort Algorithms: Shaping What Users See

DOI algorithms control the sort order of items displayed to the user, determining whether content appears on the first page or is relegated to later pages. Nudelman provides an example of using DOI to feature a trending hashtag based on web views and slope of growth, while still considering established, high-performing topics. He notes that a typical sort normally has two or more different algorithms working together to determine the overall order, often involving “secret proprietary algorithms.” He urges UX professionals to “get curious” and ask tough questions about how selections are made, how many algorithms are involved, and how they contribute to company revenue and user engagement.
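
The trending-hashtag example, with two algorithms working together, can be sketched as a blended DOI score: normalized established popularity plus relative growth momentum. The weights, field names, and data are illustrative assumptions, not a real proprietary algorithm.

```python
# Sketch of a two-factor DOI (degree of interest) sort: blend
# established popularity (total views) with momentum (slope of
# recent daily views), so a fast-growing tag can surface early.
def doi_scores(items):
    """items: {name: (total_views, recent_daily_views)} -> score per item."""
    max_views = max(v for v, _ in items.values()) or 1
    scores = {}
    for name, (views, daily) in items.items():
        deltas = [b - a for a, b in zip(daily, daily[1:])]
        slope = sum(deltas) / len(deltas) if deltas else 0.0
        growth = slope / max(daily[0], 1)  # momentum relative to baseline
        # Two algorithms blended into one sort order, as is typical.
        scores[name] = 0.5 * (views / max_views) + 0.5 * growth
    return scores

scores = doi_scores({
    "#established": (100_000, [500, 510, 505]),
    "#trending":    (2_000,   [100, 800, 3_000]),
})
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # the fast-growing tag outranks the big established one
```

Even this toy version shows why the "get curious" questions matter: the 50/50 weighting is a design decision with direct consequences for what users see first.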

The Impact of Algorithms on Society

Nudelman highlights the critical role Facebook’s sort algorithm played in spreading misinformation leading to the January 6, 2021, U.S. Capitol attack. He cites research showing Facebook’s algorithm biases towards extremes, creating political news bubbles. While removing algorithmic ranking (e.g., sorting by reverse chronological order) reduces political engagement, it also “curtailed the amount of time people spent on the platform,” impacting Meta’s revenue. This illustrates the complex trade-offs between engagement and societal impact. Nudelman concludes that the importance of UX involvement in understanding AI algorithms and their effects on user behavior cannot be overstated.

Design Exercise: Create Your Own Dynamic Dashboards and Sort UI

Readers are tasked with designing dynamic dashboards and sort UIs for their application’s content. They must consider if a “visual smorgasbord” or sorted list is needed, the types of sections/rubrics to display, effective content display methods (tiles, carousels), criteria for AI to select and order rubrics, data needs for training, and various DOI sort order algorithms (popularity, recency). Crucially, they must also consider the dangers and ethical implications of a particular sort order.

Chapter 12: Modern Information Architecture for AI-First Applications – Structuring Intelligence

Nudelman addresses the misconception that Information Architecture (IA) is dead in the age of AI chat. He argues that while “chat is the new command line,” a chat alone is often insufficient for a complete application. He introduces a new AI-first Information Architecture framework designed to transcend the limitations of chat and provide a comprehensive user experience.

Design Pattern du Jour: The Canvas – Beyond Chat

Nudelman observes that the Canvas pattern, introduced by ChatGPT, is currently “all the rage.” However, he cautions that even with a dynamic canvas, applications still require traditional IA elements like personality edit screens, welcome experiences, and payment history. He emphasizes that what is good for ChatGPT is not necessarily appropriate for all SaaS or e-commerce applications and that “now is not the time to copy—it is the time to invent.”

Is Information Architecture Dead? No, It’s More Crucial Than Ever

Nudelman contends that IA is essential for AI-first applications. Without it, customers struggle to understand what an app does or its value, and they need help forming queries that deliver maximum benefit. He notes that giving instructions to AI is “really hard” and that predetermined queries and starting points are often necessary. He introduces his AI-first Information Architecture framework by comparing the conventional Amazon.com with a reimagined AI-first version.

AI-First Amazon.com Redesign: A New IA Framework

Nudelman proposes an AI-first Amazon.com redesign based on five core page types: Analysis Overview, Category Analysis, LLM Search Results, Item Detail, and Maintenance.

  • AI-First Analysis Overview Page (Homepage Replacement): Replaces the old homepage with dynamic content driven by an LLM text summary. For example, a Black Friday analysis page could highlight unique shopping incentives and gift categories based on current fears.
  • AI-First Category Analysis Pages: LLMs understand user interests and generate personalized category pages. Nudelman demonstrates with a “Fishing Category page” where AI (ChatGPT and Midjourney) analyzes past orders to suggest specific items (e.g., another fishing rod) and generates a custom Hero Image that “tells a story” about the adventure of fishing, creating an “emotional response.” This is what “AI-first design” means.
  • AI-First LLM Search: Nudelman reiterates that LLMs easily handle complex NLP queries like “Mysteries That Are Not Scary,” providing playful and precise machine interpretations of queries. The results include a “Scariness Rating” (a custom metric), item summaries, and augmented facets. This search makes an effort to “tell the story in the context of the customer’s question.”
  • AI-First Item Detail: Moves beyond conventional item details with separate search boxes. An all-in-one “ask bar” contextualized to the specific item (e.g., using an editable orange tag) allows users to ask questions about item specifics or even visually demonstrate product fit. It includes item-specific, user-specific LLM summaries that anticipate user concerns (e.g., wide feet for shoes) and user-specific Next Steps questions. Nudelman cautions that “not everything needs to be AI,” citing Netflix’s simple user profile selection as an example.
  • AI-First Maintenance Pages: LLMs can provide AI-driven guidance and troubleshooting for settings (e.g., validating addresses) and act as independent AI agents to answer order-related questions in a conversational Q&A format.

Long Live Information Architecture!

Nudelman concludes that his framework does not discard old IA wisdom but rather reimagines it for an AI-first world. The AI-first application makes an effort to “tell the story in the context of the customer’s needs,” offering individualized content, explanations, ratings, and filters. This approach, while requiring investment in tech stack, delivers a superior experience.

Generative UI = Individualized UX

Jakob Nielsen predicts a “second-generation generative UI” where the user interface is generated afresh every time a user accesses an app, leading to drastically different designs for different users. This allows for genuine adaptation to disabled users, beginners, and experts. UX designers will no longer design exact UIs but specify the rules and heuristics AI uses to generate the UI. Nielsen compares this shift to responsive web design, where pixel-perfect control was lost but broader adaptability gained. For blind users, generative UI can create optimized one-dimensional auditory interfaces that are more concise.

Sentient Design and the Intelligent Interface

Josh Clark posits that AI should be seen as a “material” woven into the fabric of digital interfaces, leading to “sentient design”—experiences that feel almost self-aware and radically adaptive to user needs. These AI-mediated experiences are conceived and compiled in real time, following the user’s intent. Beyond chat, this extends to context-aware dashboards (Salesforce) and AI-generated interfaces or applications (Apple’s Math Notes, Claude’s Artifact feature). Sentient design focuses on delivering meaningful human outcomes by collapsing the effort needed to achieve them.

Chapter 13: Forecasting with Line Graphs – Predicting Future Trends

Nudelman delves into forecasting as one of the most important uses of AI, emphasizing its application beyond sales and weather to areas like weight loss, product demand, and more. He highlights how AI can enhance this ancient practice, despite its often simple visual representation as a line graph.

Understanding Linear Regression and R-Squared

Forecasting often involves linear regression, drawing a straight line through data points to predict future values. The “goodness” of this fit is measured by R-squared, a number between 0 and 1, where higher values indicate a better fit and more trustworthy predictions. Nudelman uses the “cone of shame” analogy for confidence intervals, which visually indicate the possible range of a forecast, with wider cones signifying more uncertainty. He stresses that R-squared does not indicate direction, which can be critical (e.g., underestimating food for a North Pole journey is catastrophic, overestimating is a minor inconvenience). In such cases, using R (which accounts for direction) might be more appropriate.
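
The linear-regression-plus-R-squared workflow can be shown end to end with the standard library alone. The data values below are made up for illustration; the formulas are the standard least-squares fit and coefficient of determination.

```python
# Worked sketch of linear-regression forecasting and R-squared.
def fit_line(xs, ys):
    """Least-squares slope and intercept through the data points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Goodness of fit: 1.0 = perfect, near 0 = untrustworthy forecast."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]              # e.g. week number
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # e.g. metric being forecast
slope, intercept = fit_line(xs, ys)
r2 = r_squared(xs, ys, slope, intercept)
forecast_week_6 = slope * 6 + intercept
```

Note that R-squared is squared, so it is directionless, which is exactly Nudelman's North Pole caveat: a high R² alone cannot tell you whether the fit tends to over- or under-predict.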

Forecasting with AI: Beyond Simple Math

While simple math often suffices for forecasting, Nudelman focuses on techniques where AI/ML methods are crucial:

  • Nonlinear Regression: Few things in nature have a true linear relationship. For data that curves (like chlorine degradation over time), nonlinear regression, often determined by AI/ML algorithms, provides a better fit. UX designers must ask questions to ensure the predicted curve matches physical reality and avoids “hallucinations” (e.g., chlorine increasing over time).
  • Seasonality: AI/ML models are essential for predicting patterns with weekly, monthly, or yearly seasonality (e.g., website traffic peaks during working hours or holiday spikes). Understanding underlying forces and accounting for the differential consequences of overshooting vs. undershooting (e.g., higher AWS bill vs. website crash) is critical. AI-based models can integrate environmental factors (like precipitation for irrigation) to maximize accuracy and profit.
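
The chlorine-degradation example can be sketched as an exponential-decay fit: linearize by taking logs, fit a line, and transform back. The hours and ppm values are illustrative, and the sanity check at the end mirrors the chapter's advice to verify the predicted curve against physical reality.

```python
# Sketch of nonlinear (exponential-decay) regression for data that
# curves, like chlorine concentration over time: fit a line to
# log-transformed values, then map back to C(t) = C0 * exp(k * t).
import math

hours    = [0, 2, 4, 6, 8]
chlorine = [3.0, 2.2, 1.6, 1.2, 0.9]  # ppm, decaying over time

log_y = [math.log(y) for y in chlorine]
n = len(hours)
mx, my = sum(hours) / n, sum(log_y) / n
k = (sum((x - mx) * (y - my) for x, y in zip(hours, log_y))
     / sum((x - mx) ** 2 for x in hours))   # decay rate (should be < 0)
c0 = math.exp(my - k * mx)                  # fitted initial concentration

def predict(t):
    return c0 * math.exp(k * t)

# Sanity check against physical reality, as the chapter urges:
# chlorine can only decrease, so a fitted increase is a "hallucination".
assert k < 0
```

A positive fitted `k` here would be exactly the kind of physically impossible prediction a UX designer should catch by asking questions of the data science team.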

Forecasting an Aggregate Variable: Bar Graphs for Clarity

For aggregate variables (e.g., daily water consumption), a bar graph is a better choice for forecasting than a line graph. Nudelman provides an example of AI forecasting weekly water demand, noting how AI can account for seasonality and environmental factors to optimize consumption and cost. He underscores that AI is crucial for accurate demand forecasts that simple algorithms cannot achieve.

Design Exercise: Design Your Own Forecasting UI

Readers are tasked with designing forecasting UIs for their projects. They must identify three variables for line graph forecasting, considering their impact on customer decisions and the consequences of overshooting/undershooting predictions. They also need to identify three aggregate variables for bar graph forecasting, considering ideal training factors, available data, and missing data sources. The “Life Clock Forecasting” example illustrates a line graph for lifespan prediction (with confidence interval) and a bar graph showing weekly aggregate metrics (calorie intake, exercise, lifetime increase/decrease), highlighting how these visuals can drive behavior change.

Chapter 14: Designing for Anomaly Detection – Spotting the Unusual

Nudelman introduces anomaly detection as a vital AI application for identifying unusual patterns in data, providing recommendations for designing user interfaces for these use cases. Mastering this chapter enables UX designers and product managers to have high-quality, detailed conversations with data science and engineering colleagues.

Why Is Detecting Anomalies Important? Real-World Applications

Anomaly detection is crucial for:

  • Critical Production Issues: (e.g., sudden drop in signal strength in telecom)
  • Quality Control and Assurance: (e.g., gadget measurement deviations in manufacturing)
  • Security and Fraud Detection: (e.g., unusual credit card activity)
  • Early Warning Systems: (e.g., predicting industrial machinery failure via vibration)
  • Improving Decision-Making: (e.g., capitalizing on “happy” traffic spikes in e-commerce)
  • Compliance and Regulation: (e.g., ensuring product quality in pharmaceuticals)

Four Main Anomaly Types: Understanding the Nuances

Nudelman categorizes anomalies based on Andrew Maguire’s classification:

  1. Point Anomaly: Brief “spikes” exceeding a static or dynamic threshold (e.g., CPU Busy Percent spike).
    • Static thresholds are fixed limits.
    • Dynamic thresholds (like Bollinger Bands) adjust based on moving averages and standard deviations, reducing false positives for fluctuating metrics (e.g., network traffic volume). UX must work with experts to determine appropriate thresholds, as dynamic thresholds are not suitable for all measurements (e.g., compliance metrics).
    • A single point anomaly is often not significant; alerts are often triggered by multiple occurrences within a time period.
  2. Change Point Anomaly: A sustained, unexpected change that remains over time (e.g., value exceeding a threshold for over a minute). UI design for these differentiates from point anomalies by focusing on duration of the change. Both can often be covered in a single UI with an “occurrence timer” or duration setting.
  3. Contextual Anomaly: “Shape change over time” anomalies, where the behavior of a variable deviates from an expected pattern (e.g., unseasonable traffic spike/drop, slow drift). These are often detected by AI/ML methods.
    • Seasonal shape anomaly detection (e.g., Jepto’s UI) allows configuration of time period and direction.
    • Opportunities for Improvement: Nudelman suggests automatic periodicity selection, self-balancing algorithms to reduce false positives, intuitive sensitivity sliders (instead of percentiles), and integrating anomaly recommendations into a Copilot chat.
  4. Curve Shape Anomaly: Detection relies purely on AI recognizing gradual changes in a curve’s shape (e.g., Horse-Head oil pump’s “Downhole Dynacard”). AI can be trained to recognize shapes (e.g., “pounding”) and trigger autonomous actions (e.g., stopping the pump). The UI for controlling this is simple, focusing on analysis frequency, confidence thresholds, and actions.
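
The dynamic-threshold idea from point anomalies can be sketched as a Bollinger-Band-style detector: flag any value outside a moving mean ± k standard deviations. The window size, k, and traffic values are illustrative assumptions.

```python
# Sketch of a dynamic (Bollinger-Band-style) threshold for point-anomaly
# detection: the bands adjust to recent data, reducing false positives
# for naturally fluctuating metrics like network traffic.
import statistics

def bollinger_anomalies(series, window=5, k=2.0):
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        std = statistics.stdev(recent)
        upper, lower = mean + k * std, mean - k * std
        if not (lower <= series[i] <= upper):
            anomalies.append(i)  # index of the spike
    return anomalies

traffic = [100, 102, 98, 101, 99, 100, 340, 101, 98]  # one spike
print(bollinger_anomalies(traffic))  # flags the 340 at index 6
```

As the chapter notes, this style of threshold is wrong for metrics with fixed limits (e.g., compliance), where a static threshold is the correct design.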

Shorthand UX Design Notation as AI Prompt: Automating “Robot Monkey Work”

Nudelman introduces “Shorthand UX Design Notation”—a text-based notation for common UI components (input fields, dropdowns, checkboxes) developed over 20 years ago. This notation, which looks like [Fluid Pound] //input field or [10 minutes \/ ] //drop-down, can now be used as a prompt for AI to directly create functional React code. This automates “Robot Monkey Work” (repetitive, boring tasks like redrawing forms and tables), freeing designers for human work like empathy, orchestration, and innovation. He illustrates with the Horse-Head pump form and table, showing how Claude AI can fix abbreviations and errors, and generate code, acting as a cost-effective RAG (retrieval-augmented generation) example.
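
The two notation forms quoted above are regular enough to parse mechanically; the toy parser below (my sketch, not from the book) shows how such a pre-processor might normalize shorthand lines before handing them to an LLM for code generation.

```python
# Sketch of parsing Shorthand UX Design Notation lines like
#   [Fluid Pound] //input field
#   [10 minutes \/ ] //drop-down
# into structured component descriptions.
import re

def parse_shorthand(line: str):
    m = re.match(r"\[(.+?)\]\s*//\s*(.+)", line)
    if not m:
        return None
    label, kind = m.group(1).strip(), m.group(2).strip()
    if label.endswith("\\/"):              # "\/" marks the drop-down arrow
        label = label[:-2].strip()
    return {"label": label, "component": kind}

print(parse_shorthand("[Fluid Pound] //input field"))
print(parse_shorthand("[10 minutes \\/ ] //drop-down"))
```

In practice the LLM itself tolerates abbreviations and errors in the notation, which is what makes the shorthand useful as a prompt without any formal parser at all.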

Design Exercise: Create Your Own Anomaly Detection UI

Readers are prompted to design anomaly detection UIs for their system, identifying metrics for point, change point, contextual, and curve shape anomalies. They must consider static/dynamic thresholds, Bollinger Bands, duration settings, periodicity, and actions. The “Life Clock Anomaly Detection UI” example shows how various anomalies (caloric intake spike, prolonged wakeful period, low exercise time, body composition change) can be listed in a single “Anomalies” list for discussion and testing.

PART 3: Research for AI Projects – Elevating User-Centered Design in the AI Era

Part 3 of “UX for AI” introduces essential techniques for user research within AI projects, arguing that AI necessitates a new, flexible, and continuously evolving user-centered design process. It begins with a case study demonstrating a powerful brainstorming method, then details the “new normal” AI-inclusive process, delves into specific AI-driven research techniques, and concludes with a deep dive into the RITE (Rapid Iterative Testing and Evaluation) methodology.

Chapter 16: Case Study: MUSE/Disciplined Brainstorming – Unleashing Novel Ideas

Nudelman uses a fictional AI assistant writing app, MUSE (Machine-Underpinned Sidekick Engrosser), to demonstrate the “bookending” design brainstorming method. This method, an extension of Leah Buley’s “disciplined brainstorming,” aims to quickly brainstorm practical design approaches by exploring an idea as far as it can comfortably go (“bookend”) before pivoting to a new direction.

Exploring Design Ideas Through Bookending

Nudelman illustrates the process with five design ideas for MUSE:

  • Design Idea #1 (Side-Panel Copilot): Inspired by Chapter 7, an AI assistant “lives” in a Word document’s side panel, offering ideas and inserting responses. The idea is taken “as far as it goes” when discussions turn to panel placement or button colors.
  • Design Idea #2 (GitHub Copilot Inspired): AI prompts are embedded directly in the text via comments, and AI inserts output below the comment. This allows for a more natural interaction within the writing editor.
  • Design Idea #3 (Writer’s Block Autorecommendation): A variation where AI reads text and predicts what comes next when the writer pauses. This has the advantage of learning directly from previous pieces and fine-tuning suggestions based on user picks. This idea is pushed to the “bookend” when considering anthropomorphized AI personalities.
  • Design Idea #4 (Dedicated UI like ChatGPT): A full-screen AI UI starting with simple writing prompts. This expands to include canvas features for writing entire books, but the pivot occurs before getting too deep into those features.
  • Design Idea #5 (Scrivener/Grammarly GO Hybrid): Combining Scrivener’s “snippet” card management with an AI writing assistant. Users write AI prompts as card titles, AI writes content inside, and offers auto-suggestions. AI can make additional suggestions to the story’s overall flow, creating new “purple” cards to fill narrative holes. This idea is novel and potentially patentable, achieved by reformatting existing product ideas to leverage AI strengths.

The Power of Bookending

Nudelman highlights that the bookending method is powerful because it encourages sketching multiple designs quickly, avoids anchoring on a single idea, and helps stumble upon novel and interesting solutions by leveraging existing product ideas and AI’s strengths. He encourages readers to embrace this method to design novel UX for AI-driven products.

Design Exercise: Create Your Novel Designs Using Bookending

Readers are challenged to apply the bookending/disciplined brainstorming method to their own AI-driven project, generating novel designs for future RITE user testing. Tips include drawing inspiration from similar products, various brands (Apple, Facebook, Amazon), mythical characters (Ironman, Star Trek), and different UI modalities (toy dog, kiosk, command line). The key is to sketch quickly, recycle prototype parts, and avoid getting stuck on minor details. The “Life Clock” design examples throughout the book are presented as existing applications of this method.

Chapter 17: The New Normal: AI-Inclusive User-Centered Design Process – Adapting to the AI Era

Nudelman asserts that AI demands a new level of rapid, flexible, user-centered thinking and rapid adjustment—a new process that continuously up-levels the three pillars of AI-driven designs: user interface, AI model, and data. He presents a new design process diagram to guide teams in this “new normal.”

The Evolution of UX Process Diagrams

Traditional linear UX process diagrams (definition -> ideation -> prototyping -> testing -> release) are dismissed as “bullshit,” as the “real” UX process is messy and cyclical (e.g., RITE). However, Nudelman argues that even these cyclical diagrams fail to capture the reality of designing AI-driven products because they hark back to an industrial age where “idea people” were separated from “implementation people,” and where UX had the luxury of focusing solely on design. With AI, this process is now flipped on its head.

The Monkey or the Pedestal? Iterating UI, AI, and Data Together

Nudelman uses the analogy of training a monkey to stand on a pedestal and recite Shakespeare. He refutes Dr. Astro Teller’s “tackle the monkey first” advice, stating that in AI, the UI is the pedestal, the AI is the monkey, and the data is the Shakespeare. The core insight is: “You have to do all of them together. The new AI-involved process is a continuous iteration of UI, AI, and data, in combination, aimed toward rapid product release.”

A New Way of User-Centered Thinking: The AI-Inclusive Process

The new AI-inclusive user-centered process involves a rapid iterative problem definition and solution development cycle. This cycle is similar to traditional RITE but adds a periodic AI model “spike.”

  • What the Heck Is a Spike?: A spike is a quick, rough, proof-of-concept project designed to demonstrate feasibility (e.g., a simple Python notebook for a bare-bones AI I/O experience). Its purpose is to quickly nail down the problem definition and provide a “proof of concept” solution.
  • What Is the Role of Data?: “AI” is actually made up of two interconnected parts: AI model and data. Data trains and validates the AI model. As UX design iterates, new requirements for the AI model emerge, which in turn require new data (potentially missing, biased, or legally constrained), affecting UX viability and leading to further design work and AI model re-spiking.
  • Where Is the Customer in All This?: The customer remains at the center, with customer feedback brought much more forward in the development process.
  • Why Is This Change Necessary?: AI output is highly interactive and tedious to mock up in traditional tools like Figma. Figma prototypes are “pale imitations” of the real thing. To make user evaluation realistic, AI output must be experienced “live” from an actual AI model (e.g., a Python notebook). This creates a modern “Wizard of Oz” user testing approach, providing more precise and realistic insights by having customers brainstorm questions, testing the spiked AI model’s response, and discussing missing data or ethical implications.

How Does This Affect the Role of UX? The “Glue” Aspect

The traditional UX role of removing barriers, innovating, and competitive analysis remains. However, the “glue” aspect of UX—tying together customers, business, and technology—becomes increasingly important due to the unknown and unpredictable nature of AI technology. UX-driven efficiencies, rapid prototyping, and lean decision-making also gain prominence. Nudelman stresses that UX professionals must learn about AI to ask good questions.

Final Handoff to Dev and Continuous Learning

The handoff to development becomes an explicit step, separating exploratory “spike” development from production efforts. UX provides detailed guidance, thinking of the dev team as their customer. Crucially, AI-driven products are “never actually ‘done’”; they are trained and continuously learn from user input and feedback after release, with the pace of learning accelerating.

Many More Changes to Come: The New Normal

Nudelman concludes that the AI-inclusive user-centered process, with its continuous rapid adjustment that revolves around the user in a loop running between the UI mockup, AI model, and data, will be key to successfully leading AI-driven projects with UX. This requires UX professionals to embrace the rapid evolution and complexities of AI.

Chapter 18: AI and UX Research – The Evolving Landscape of UX Practice

Nudelman explores the profound impact of AI on UX research, addressing questions about job security for researchers and the skills that will be most in demand. He categorizes UX research techniques into four areas: automated, augmented, increasingly valuable, and “AI Bullshit.”

UX Techniques That Will Likely See Full Automation

Activities that involve routinely creating and processing textual information will be the first to be fully automated:

  • Routine Usability Studies: Will be mostly automated, a trend already visible. Nudelman argues that RITE (Rapid Iterative Testing and Evaluation) methodology will almost entirely eclipse traditional usability studies as a more strategic and productive alternative.
  • Routine NPS Studies and Surveys: Will require diminishing human intervention for question writing, data analysis, and report generation.
  • Collecting and Organizing Research Data: Will be radically altered, with AI capable of collating, reporting, creating affinity diagrams, and generating executive strategy insights from vast databases of qualitative data.
  • Triangulation of Quantitative and Qualitative Insights: This “holy grail” will become the norm for every new project, minimizing irresponsible spending on pet projects.

UX Techniques That Will Be Radically Augmented

AI tools will significantly increase speed and efficiency by augmenting current processes:

  • Competitive Analysis: AI will rapidly mine screenshots, video frames, and voiceovers to reverse-engineer functionality and speed up reporting. It will be AI-augmented, with humans and machines collaborating closely.
  • Identification of Novel Use Cases: AI tools will heavily augment the process of finding lucrative market opportunities and niche offerings, becoming standard for business requirements documents.
  • RITE Studies: Will be radically and permanently altered, with researchers and designers leveraging AI to become “AI whisperers” who proficiently use AI-augmentation technology.

UX Techniques That Will Become Increasingly Valuable

Skills that AI struggles to understand and simulate will drastically increase in value:

  • Core Skills (Human-Centric): “Soft skills” are becoming “core skills.” The “three in a box” model (Devs, PMs, UX) is evolving into “four in a box” (adding data scientists and AI specialists). Consensus building, negotiation, and making people feel good while working together will become even more prominent.
  • Workshop Facilitation: Unlikely to be automated or augmented, as it involves developing novel ideas and driving consensus from conflicting opinions.
  • Formative Research, Field Studies, Ethnography, and Direct Observation: These fundamental observation-based techniques will be very hard for AI to augment or replace, especially for hands-on professions and complex human interactions.
  • Vision Prototyping: A key technique for synthesizing research, market needs, and imagination to create prototypes of novel products. It involves creating something new that is difficult for AI to model or automate.
  • Augmenting the Executive Strategy: UX staff who can leverage business and technology understanding to synthesize insights into novel solutions will be in high demand, providing multidisciplinary analysis.

AI Bullshit: Pitfalls to Avoid

Nudelman identifies specific AI applications for UX research that are “far-fetched, oversold, or run contrary to foundational principles”:

  • AI Strategic Analysis Tools That Replace Humans in Coming Up with Novel Ideas and Business Use Cases: AI cannot replace CPOs or UX directors. “Adopting AI decisions instead of human decisions is a dangerous and costly assumption” that guarantees building products for robots.
  • AI Heuristics Analysis Replacing User Research and Design: Claiming simple ML functions remove the need for user testing or replace designers is “pure BS.” Heuristics are guides, not replacements for “real-world constraints” requiring human teams and user research.
  • AI Acting as “Synthetic Users” for the Purposes of Usability Research: This is a “cockamamie idea” gaining traction. “Replacing actual user studies with AI models will guarantee that you will build products for robots, not for actual customers.” He cites Baymard Institute’s finding of an 80% error rate for ChatGPT-4 in UX audits.
  • Build Your Persona Using AI: The value of persona-building lies in consensus-building, discussion, and team education, which AI cannot shortcut.

Navigating the Abyss: The Dark Side of Synthetic AI User Research Tools

Nudelman critiques tools like Synthetic Users (“User Research. Without the Users.”). He argues that if you only talk to robots while designing a system, you will end up designing a system for robots to use. The purpose of talking to people is to understand what they find useful and joyful. He cites Jakob Nielsen, who states AI cannot substitute for user research with real users and that AI tools mimic “typical” behaviors, missing diverse user backgrounds. Pavel Samsonov adds: “There is one more very important difference between an LLM and a customer: The LLM can’t buy your product.”

When AI Adds Value to Research, and When It Wreaks Havoc

Kathryn E. Campbell notes that Gen AI can replace a “moderately sharp research intern” for tasks like comprehensive literature reviews, summarizing interview transcripts, or categorizing images from diary studies. These tasks can save hundreds of hours. However, AI fails in several critical areas:

  • Western Bias: AI over-relies on English content, missing regional competitors or underrepresenting marginalized groups.
  • Nuanced Language: It struggles with humor, allegory, or sarcasm, leading to literal and embarrassing interpretations.
  • Hallucinations: AI still regularly references bad sources, confuses concepts, and generates false information (23% of organizations experienced negative consequences due to GenAI inaccuracy in a McKinsey survey).

Campbell stresses that AI cannot replace human skills like contextual understanding (e.g., parent in room during teen interview), observation (noting discrepancies between participant’s words and behavior), or “Aha moments” (identifying unexpected outliers and rapidly forming new hypotheses). While AI can make researchers faster and more productive, it won’t make them obsolete.

Chapter 19: RITE, the Cornerstone of Your AI Research – Rapid Iteration for Success

Nudelman advocates strongly for RITE (Rapid Iterative Testing and Evaluation) studies over traditional usability tests for designing AI-driven products. He asserts that RITE is the only methodology he’s experienced that consistently yields more delightful, usable, and successful AI-driven products in less time.

RITE Study vs. Usability Test: A Fundamental Difference in Focus

Nudelman highlights key differences:

  • RITE Studies Form the Core of the Design Process; Usability Tests Are Often Treated as QA: Traditional usability tests are often seen as expensive, optional, and conducted late in the process, functioning as elaborate QA. This means fundamental issues (wrong use case, insufficient data, misaligned AI model) are already “baked in” and cannot be changed. In contrast, RITE studies are conducted as early as possible and are integral to the design process, allowing for fundamental changes to UI, AI model, and data as needed.
  • RITE Studies Demand the Simplest Appropriate Prototypes That Change Rapidly; Usability Tests Often Mean Fancy Rigid Prototypes: RITE prototypes are rough (sticky notes, simple Figma click-throughs, or Python notebook AI output), offering just enough detail to answer specific UX questions, including the crucial “is this project even worth doing?” question. Rough prototypes invite change; fancy prototypes prohibit it. RITE participants and moderators are comfortable co-creating and brainstorming ideas on the fly, allowing for immediate prototype updates.
  • RITE Studies Produce Solutions; Usability Tests Produce Reports: Traditional usability tests generate reports, often adversarial and less useful in the rapidly evolving AI industry where solid best practices are still emerging. RITE, instead, focuses on getting continuous feedback and iterating rapidly to a solution that actually works. Nudelman rarely videotapes or provides elaborate reports; the product of a RITE study is the improved design solution itself.

The Fringe Benefits of RITE Studies

RITE studies inherently fit Agile/Scrum projects and are less adversarial due to their focus on solutions rather than inflammatory reports. This approach helps build effective AI-driven design teams, fostering real-time, cross-functional collaboration where the entire 4-in-the-box team focuses on creative problem-solving.

How to Conduct a RITE Study: Practical Steps for Rapid Iteration

Nudelman outlines the process:

  • Start Small: Begin with just 1–3 screens, focusing on the most essential use case.
  • Keep It Short: Conduct quick 10–15-minute conversations with real customers, using the prototype as a conversation starter. This low investment encourages early and continuous UX engagement.
  • Open-Ended Inquiry: Start by setting the scene and asking for initial impressions. Then, ask “What would you want to do next?” to encourage natural exploration.
  • Real-Time Co-Creation: If the next screen isn’t ready, live-sketch the desired design with the customer. This rapid, real-time co-creation is invaluable for AI-driven product design. Use a document camera for remote sessions.
  • Probe and Pivot: After co-creation, present prepared wireframes to compare and elicit preferences (faster/easier/more enjoyable). Be flexible to investigate interesting remarks and sketch new designs on the call.
  • The Million-Dollar Question: Always ask: “Would you pay for this app? If so, how much?” and explore monetization details.
  • Multiple Rounds: Subsequent rounds follow the same structure but delve deeper, potentially adding foundational pages (Analysis Overview, LLM Search, etc.).
  • Expected Outcomes: After a few rounds (10–30 minutes each with 4–5 customers per round), you should have a validated primary use case, 1–2 refined design options, rough key screen drawings, monetization validation, MVP scope, and a better understanding of digital twin inputs/outputs and AI model design.

The RITE Design Evolution and AI-Assisted Future

At the end of a 2–3 week RITE study, teams should have a single validated design direction, rough key screens, identified essential functions, and validated monetization, MVP scope, and AI model design with the 4-in-the-box team. This ensures everyone is aligned before final high-definition screens and development handoff.
Nudelman envisions an AI-assisted RITE methodology in the near future. AI will enable real-time design iteration during research sessions, generating alternative page designs or flows based on live participant feedback and researcher prompts (like Midjourney’s /imagine). This will compress research and design cycles to near-real-time co-creation, outputting fully developed flows directly implemented in React code. The “picture step” of Figma wireframes will be bypassed. The key skill will be the intuition of picking the right direction and giving accurate prompts to AI.

The AI Advancements That Are Changing the Way We Design

Greg Aper details AI advancements that are revolutionizing design:

  • LLM Memory, GPTs, and Knowledge: ChatGPT’s improved memory allows it to consistently consider context across conversations, transforming it from “viable for design tasks to transformational.” This enables cross-referencing multiple data types and subject matters, opening new paradigms for ideation and analysis.
  • UI Design with AI: Text-to-image tools (Midjourney) can now create valuable UI concepts with proper aspect ratios, color palettes, and text. AI wireframing tools (Relume) bridge UX and UI design, allowing quick transitions from chat-generated narratives to professional mid-fidelity sitemaps and responsive wireframes. Figma integrations allow direct insertion of AI-generated visuals.
  • Scene, Style, and Character Consistency: Midjourney’s image reference, style reference (--sref), and character reference (--cref) features provide high-precision repeatability, crucial for creating consistent visual narratives for personas, activities, and environments. This allows designers to illustrate users’ challenges and aspirational goals directly.

Aper predicts AI video tools will become workhorses for designers, and GPTs will become personalized co-collaborators. He encourages designers to explore AI’s possibilities and see it as a “complementary partner” that amplifies natural talents.

PART 4: Bias and Ethics – Navigating the Responsible Development of AI

Part 4 of “UX for AI” addresses the crucial topics of AI bias and ethics, which are integral to responsible AI product design. It begins with a powerful case study demonstrating how vision prototyping can uncover uncomfortable truths and biases. This section includes multiple perspectives from UX luminaries, offering practical approaches to evaluate bias, discuss ethical dilemmas, and preserve human creativity. The book concludes with a call to action for UX professionals to lead in ensuring AI benefits humanity and the planet.

Chapter 20: Case Study: Asking Tough Questions Through Vision Prototyping – Uncovering Hidden Truths

Nudelman presents a case study of a large industrial company struggling to accurately measure pipe thickness for corrosion detection. The company spent years perfecting cheap sensors to mimic expensive manual processes, based on a simple tactical UI (time on X-axis, pipe thickness on Y-axis). They lacked a vision for what to do after accurate measurement, and crucially, were unwilling to ask uncomfortable questions.

The Core Problem: Precision, Not Accuracy

Nudelman’s research revealed that customers didn’t care about the sensors’ accuracy in determining absolute pipe thickness (government mandates covered that). Instead, they cared about the rate of degradation and how to decrease corrosion (e.g., with pipe coatings). This meant customers cared about precision, not accuracy. The company’s cheap sensors were already precise in measuring the rate of change, but not accurate in absolute thickness. This “match made in heaven” with their existing pipe coating business revealed a clear value for AI.

The Solution: Vision Prototyping to Drive Strategic Insight

Nudelman designed a new vision prototype that displayed different scenarios of pipe life expectancy using various coatings and preprocessing. AI would predict outcomes for manually entered scenarios and suggest optimal actions. This UI led to several patents and advanced the company beyond competitors. The success stemmed from using vision prototyping to ask uncomfortable questions and challenge the status quo.

Vision Prototyping Best Practices:

  • Play: Use humor and “goofy questions” to gently challenge sacred cows (e.g., “Just what the hell is the AI supposed to predict?”).
  • Imagine and Walk Through Scenarios: Use “bookending” to quickly sketch and explore multiple design approaches without getting stuck.
  • Set Your Ego Aside and Get Genuinely Curious: Lead explorations lightheartedly, patiently, and persistently, showing consideration and respect.
  • Assume Realistic Constraints: Use constraints (e.g., precise rate of change data was available, but accurate thickness was not) as fuel for creativity.
  • Play the “Omnipotent AI” Game: Ask “What if you had the almighty AI? What would you be able to do with it?” to uncover true value.
  • Ask, “What Would Make it Valuable?”: Continuously ask this, then “What would give us that information?” to drill down to the core problem and AI’s role.
  • Follow the Data: Focus on who has the data needed for training and how to acquire it, using tools like the Digital Twin exercise.
  • Go into the Field: Conduct firsthand field research to understand real-world challenges.
  • It Takes a Village: Leverage crises to bring together diverse perspectives (SMEs, business units) for co-creation and participatory design.

Vision Prototyping Mistakes to Avoid:

  • Don’t Aim Too Close: Vision prototypes should look 1–2 years out, not 2–3 sprints.
  • Don’t Just Show a Bunch of Screens: Build the prototype as a flow to solve a specific problem end-to-end, showing the final customer benefit.
  • Don’t Just Drop a Figma Prototype: The ideal delivery is a 1–2-minute video with a voiceover demonstrating the use case and value proposition.
  • Don’t Lorem Ipsum: Content must be authentic, realistic, and numerically accurate (leveraging LLMs for realism).
  • Don’t Worry About Every Possible Corner Case: Focus on 1–2 primary use cases.
  • Don’t Confuse Prototype with Implementation: Dream big in prototyping, but build MVP simply and iterate after validation.
  • Don’t Get Too Attached: Be prepared for failure; focus on learning and moving on.

Nudelman concludes that strategic discussions of bias and ethics are tied to ideation and vision prototyping. UX professionals are uniquely positioned to address complex and sensitive topics by integrating them into creative, positive endeavors.

Chapter 21: All AI Is Biased – Recognizing and Addressing Systemic Bias

Nudelman asserts that all AI is inherently biased, and this pervasive bias is alarming, persistent, and worsening. He emphasizes the urgent need for UX professionals to raise awareness and provides practical approaches to evaluate AI bias and ask better questions for more balanced and diverse AI responses.

The Pervasive Nature of AI Bias

Nudelman demonstrates AI bias using Midjourney image generation queries:

  • “Biologist”: Yields a vast majority of white males, with a single female figure (4% representation). Statistically, one is as likely to get a frog biologist as a woman biologist, despite US labor statistics showing 8% more female biologists than male.
  • “Basketball Player”: Predominantly generates Black males, with only one female figure.
  • “Depressed Person”: Mostly generates white women.

Nudelman critiques the absurdity of these biases and the underlying fixation on binary gender identities, ignoring the 72 other recognized gender identities.

The Core Principle: Assume All AI Is Biased

The key takeaway is: “Always assume that all AI is biased. And figure out how that bias will impact the experience.”

Survivor Bias: Looking for What Is Not There

Nudelman introduces survivor bias with the World War II airplane armor example: initial analysis focused on reinforcing areas with bullet holes on returning planes. Statistician Abraham Wald famously asked why there were no bullet holes in critical areas (engines, pilot cabin)—because planes hit there never returned. The lesson: “Train yourself to look first and foremost for what is missing and reinforce those areas by asking better questions.”

Practical Application: Rectifying Bias in AI Generation

Nudelman shows that designers can actively introduce missing diversity by slightly tweaking AI queries (e.g., “black trans biologist,” “Indian woman basketball player,” “depressed older Asian man”). He emphasizes that it takes only “a tiny bit of awareness and care.” He quotes Joy Buolamwini: “Whether AI will help us reach our aspirations, or reinforce unjust inequalities, is ultimately up to us.”

Chapter 22: AI Ethics – A Multifaceted Discussion

This chapter, co-authored with Daria Kempka, delves into the complex and multifaceted topics of AI ethics, human creativity, societal well-being, and environmental impacts. It presents multiple perspectives from UX luminaries, acknowledging that not all views may align, reflecting the complexity of the subject.

AI Humanifesto

Paul Bryan’s AI Humanifesto is a guiding framework for pro-human AI product design, balancing technology’s power with human creativity, societal well-being, and environmental sustainability. It has five core concepts:

  • Control (Governance, Accountability): Ensures transparency and human oversight over algorithms, data, decision-making, and security. It emphasizes user autonomy (modifying/opting out of AI), transparency of algorithms, and user control over data (storage, processing, sharing, deletion, consent).
  • Trust (Transparency, Reliability, Explainability): Built through consistency, transparency, and responsible data handling. This includes reliability, ownership of mistakes (acknowledging errors), data responsibility (clear explanations of data usage), collaboration (feedback mechanisms), and accountability (addressing grievances).
  • Diversity (Security, Privacy, Robustness): Fosters varied, inclusive experiences by ensuring AI is trained on diverse datasets and adaptable to a wide range of users. It means avoiding biases, actively addressing representation gaps (e.g., marginalized groups), evolving with societal understanding of diversity, breaking barriers (literacy, age, socioeconomic status), and cultural sensitivity.
  • Safety (Inclusion, Bias, Equity, Fairness): Protects users from physical, emotional, and environmental harm. It includes privacy and data security, consent and education (clear explanations of risks/benefits), prioritizing physical and emotional well-being (avoiding addiction, isolation), built-in risk mitigation, and balancing innovation with safety.
  • Balance (Sustainability, Harmony, Well-Being, Human Empowerment): Achieves harmony between competing forces in AI development. It emphasizes human empowerment (AI augmenting human potential), human vs. AI contribution (clarity on AI-generated vs. human input), efficiency vs. creativity (optimizing productivity while allowing for imagination), environmental impact (minimizing energy use, transparent communication of costs), and supporting social/interpersonal relationships.

Practical Ethics for AI Product Designers

Daria Kempka provides hands-on approaches for designers:

  • Understand That Incentives Rule Human Behavior: Businesses prioritize profit. Designers must understand the incentives driving stakeholders and align ethical considerations with them.
  • Put Your Ethics into Action by Keeping the Reality of Incentives in Mind: Design interfaces to show how AI decisions are made and allow user feedback, advocating for ethical choices by linking them to profit or brand reputation.
  • Continuously Test for Ethical Impacts: Conduct user testing with diverse groups to identify unintended consequences and potential misuse. Assess ethical implications in the Value Matrix. Stress-test conversational agents with unsolvable problems. Use other AI models to audit fairness and bias.
  • Consider the Environmental Impacts: AI consumes significant energy (e.g., a 100-word ChatGPT email equals one bottle of water). Designers should minimize AI calls and question whether AI is needed at all.

Human-Centered AI: Designing the Future of Intelligence

Ranjeet Tayi discusses the pitfalls of modern AI (unreliable data, algorithmic biases, trust deficits, complexity over usability) leading to 85% project failure. He argues that AI + Data + Design = Transformative Value by ensuring high-quality data management, cross-disciplinary collaboration, and human-centered design. He suggests upskilling everyone in AI (democratizing AI knowledge, data literacy, ethical AI), envisioning AI’s transformative potential through design, and showcasing innovations with compelling demos. He emphasizes that AI is not here to replace us but to collaborate with us, enhancing creativity and productivity.

Designing AI: Beyond the Interface

Christopher Noessel argues that AI design must consider larger systems: human and group psychology, corporate and governmental tendencies.

  • Human Psychology: He highlights the “halo effect of neutrality” (overreliance on AI), under-reliance (rejecting valid AI solutions), and deskilling (losing human capabilities). Designers must mitigate negative impacts through thoughtful design that clarifies AI’s limitations and doesn’t engineer unearned trust.
  • Bullshit: AI makes generating large quantities of “bullshit” (content where the speaker doesn’t care about truth, just effect) trivial. Designers must combat this.
  • Intricate Networks: AI interactions involve multiple users and systems working toward shared goals, requiring sophisticated design for inter-system communication and conflict resolution.
  • Expanded Stakeholders: Design affects groups, organizations, nations, and the ecosphere (economic equality, environmental sustainability, social connections). Designers must understand these power shifts and make interventions.

Noessel’s call to action: Learn technical details but regularly step back to consider broader perspectives. Resist directives to replace humans with AI, instead counterproposing human augmentation. Designers are the “first best line of defense” and must be perpetual learners, ethical guardians, and champions of human potential.

Oh, Egads! Preserving Your Creative Voice in the Age of AI

Casey Hudetz addresses O-EGADS (Overly Excessive Generative AI Dependency Syndrome)—the phenomenon of AI encroaching on the creative process, risking dulling craft and eroding personal satisfaction. He offers three strategies to avoid this trap:

  1. Create First, Compute Later: The Analog Advantage: Start with messy, rough drafts by hand (notebook and pen) to enter a flow state and prevent AI from intimidating or over-polishing ideas. Engage AI only after the core concept is solid.
  2. Let AI Be Your Muse, Not the Artist: Use AI for Ideas, Not Decisions: AI excels at ideation (e.g., generating 122 alternative uses for a toothbrush, compared to a human’s typical 5–10). Use it to jumpstart the creative engine, but humans must retain the “taste, intuition, and confidence” to choose, edit, and refine ideas, including “killing their darlings.”
  3. Reflect and Refine: Audit Your Creative Process: Apply Cal Newport’s “Craftsman Approach” to intentional technology adoption. Conduct an “AI flow audit” after projects, asking where AI boosted creativity, created friction, was effective, or hindered the process. This transforms AI from a distraction to an intentional collaborator.

Chapter 23: UX Is Dead. Long Live UX for AI! – A Call to Action for the Future

Nudelman concludes the book with a powerful message about the transformative opportunity AI presents for the UX profession, urging designers to embrace change, shed “elitism,” and become “ambassadors of innovation.”

Embracing the AI Revolution

Nudelman believes AI is a “tremendous opportunity” that will fundamentally rewrite how everything is done, moving 10 times faster than previous digital revolutions. UX designers willing to embrace this will find remarkable opportunities for growth and contribution.

The Choice to Adapt

For designers who are “requirements-driven” and only pick up a mouse with a Jira ticket, Nudelman delivers “bad news”: AI will soon create fully coded simple pages using atomic React components, rendering such roles obsolete. He emphasizes that “staying on the rollercoaster is optional” and there is no shame in quitting, only in creating mediocre products.

The End of “UX Elitism”

Nudelman declares “UX elitism” or “White-tower-ism” is over. Designers who ignore real-world constraints, produce “bullshit designs” that don’t match patterns, or create costly, valueless designs will struggle. Instead, those who “partner with or embrace project management,” understanding and driving deadlines through “deadline-aware design” (e.g., AI-generated detailed mockups and working front-end code), will gain prominence. UX designers will increasingly manage projects and timelines.

Designers as “Ambassadors of Innovation”

The future demands intensely practical, visionary design skills. Successful designers will combine a practical understanding of technology, sales, marketing, and product management (e.g., product-market fit) with the ability to “imagine novel, impactful ways of interacting with and deriving value and pleasure from technological advancements.” They will become “solution architects,” selling solutions not yet built, and “ambassadors of innovation” introducing technological advancements into daily life. This includes focusing on the ethics of technology, navigating misinformation, deep fakes, and moral biases.

The Enduring Demand for Core Skills

The “new normal” “4-in-a-box” model (PM, UX, dev, data scientists/AI specialists) will increasingly rely on “knowledge leaders” who can create plans, achieve consensus, and execute delivery. Core skills like consensus building, reconciling opposing viewpoints, orchestrating research insights, and making people feel good while collaborating will become more prominent, as AI augments specialization. Understanding technology and leveraging it for business and humanitarian needs will be key.

Combining Low-Fi UX Tools and Sophisticated AI Models

Nudelman reiterates that the book provides a complete set of low-fi UX tools (like sticky notes for prototyping) that, while simple, are practical for delivering human-focused AI products. He urges daily practice with sophisticated AI tools like ChatGPT and Claude, building custom models, and deeply understanding AI machinery.

AI Is a “Wicked Problem”

AI is a “wicked problem”—highly complex, wide-ranging, without definitive formulation or set solution, and often generating new problems. Nudelman’s framework equips designers with skills (lightweight user research, Value Matrix, Digital Twin, RITE, AI ethics) to deal with this massive change and surface unintended consequences. He emphasizes that UXers must be involved in “keeping tabs on the wicked problems.”

AI Is Just Too Important to Be Left to Data Scientists

Quoting Edward O. Wilson, “We have Paleolithic emotions, medieval institutions and godlike technology.” Nudelman’s core message is that “AI is just too important to be left to the data scientists and business people.” UX must be involved throughout the entire process—from formative research and use case evaluation to Value Matrix, Digital Twin, and ethics/bias discussions—to ensure AI operates beneficially for humans, society, and the planet, or at least to limit its harm.

The Best AI Is Augmented Intelligence

The ultimate goal is augmented intelligence: letting machines handle what they do best (number crunching, pattern recognition) and letting humans do what they do best (empathy, creativity, joy). “Achieving this ideal of augmented intelligence, the intersection of humans and machines, is where UX really shines.” Nudelman concludes with a hopeful message, encouraging designers to remain in the industry and contribute to bringing amazing things into the world.

Key Takeaways: What You Need to Remember

Core Insights from UX For AI

  • All AI is inherently biased; actively look for what’s missing in AI-generated outputs and deliberately introduce diversity through query refinement.
  • AI accuracy is irrelevant in the real world; focus on the Value Matrix by quantifying the monetary and human cost/benefit of true positives, true negatives, false positives, and false negatives.
  • The right use case is paramount; never try to replace an expert directly with AI if the AI solution costs more or lacks critical human context.
  • UX processes for AI must be a continuous, rapid iteration of UI, AI model, and data, with frequent customer feedback at every stage.
  • Storyboarding is essential for framing AI problems, making inconsistencies and absurdities immediately apparent before significant investment.
  • Digital Twin modeling is crucial for understanding an AI system’s inputs, outputs, and ethical implications, especially when done collaboratively by a cross-functional team.
  • SaaS Copilots require stateful design and fine-tuned AI models with real-time plug-ins to provide relevant, in-context assistance.
  • Reporting is a powerful LLM use case, with tools like Zoom AI Companion (automated summaries, UI modality switch) and Microsoft Security Copilot (executive summaries, human-curated pinboards) demonstrating AI’s value.
  • LLM design patterns like Restating, Auto-Complete, Talk-Back, Initial Suggestions, Next Steps, Regen Tweaks, and Guardrails are critical for enhancing human-AI interaction and managing output.
  • LLMs are revolutionizing search UIs, enabling effective fuzzy query resolution and transforming conventional Amazon/Google search into personalized, contextual experiences.
  • Dynamic dashboards and DOI (Degree of Interest) algorithms shape user experience and revenue; understanding their biases and ethical implications is crucial.
  • Modern Information Architecture for AI-first applications transcends chat-only interfaces, creating interconnected, individualized experiences (e.g., AI-first Amazon.com with contextual summaries and custom imagery).
  • Forecasting with AI goes beyond simple linear regression to nonlinear relationships and complex seasonality, requiring UX to ensure models reflect physical reality and account for the different costs of overshooting or undershooting predictions.
  • Anomaly detection is vital for critical issues, fraud, and early warnings, with UX needing to design for point, change point, contextual, and curve shape anomalies, considering static vs. dynamic thresholds and human oversight.
  • AI agents represent the future of human-AI collaboration, requiring flexible UIs, new controls, and a shift to “hiring” AI to perform tasks, demanding continuous learning and careful ethical oversight.
  • Many routine UX research tasks will be automated or radically augmented by AI, while core human skills like empathy, critical thinking, workshop facilitation, formative research, and vision prototyping will become increasingly valuable.
  • RITE (Rapid Iterative Testing and Evaluation) is the cornerstone of AI research, promoting rapid iteration with rough prototypes and continuous team collaboration, moving designs directly from feedback to functional code with AI assistance.
  • All AI is biased due to training data and algorithms; UX professionals must actively recognize, test for, and mitigate these biases to ensure fairness and inclusivity.
  • AI ethics is a multidisciplinary design challenge that requires balancing innovation with human control, trust, diversity, safety, and environmental sustainability.
  • AI is a “wicked problem” without easy solutions; UX must engage deeply to navigate its complexities and unintended consequences.
  • The best AI is augmented intelligence, leveraging machines for computation and humans for empathy and creativity.
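The Value Matrix takeaway above can be made concrete: instead of optimizing raw accuracy, weigh each confusion-matrix outcome by its monetary cost or benefit. The following is a minimal Python sketch of that idea; the dollar values, model names, and counts are entirely hypothetical illustrations, not figures from the book:

```python
# Hypothetical per-outcome values for an AI alerting system (illustrative only).
# Positive numbers are benefits, negative numbers are costs, in dollars per event.
VALUE = {
    "TP": 500.0,    # caught a real incident early
    "TN": 0.0,      # correctly stayed quiet
    "FP": -50.0,    # analyst time wasted triaging a false alarm
    "FN": -5000.0,  # missed incident: the costliest outcome by far
}

def expected_value(counts: dict) -> float:
    """Total business value of a model's confusion-matrix counts."""
    return sum(VALUE[outcome] * n for outcome, n in counts.items())

# Model A has higher accuracy (98.0% vs. 95.3% on 1,000 events),
# but Model B misses far fewer incidents—and incidents dominate the cost.
model_a = {"TP": 80, "TN": 900, "FP": 5, "FN": 15}
model_b = {"TP": 93, "TN": 860, "FP": 45, "FN": 2}

print(expected_value(model_a))  # → -35250.0
print(expected_value(model_b))  # → 34250.0
```

The “less accurate” model delivers dramatically more value once false negatives carry their real-world price tag, which is exactly why the book argues accuracy alone is irrelevant.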

Immediate Actions to Take Today

  • Start using low-fidelity tools (pencil, sticky notes) for all your design exercises and brainstorming, embracing their flexibility.
  • Practice the bookending method to quickly generate multiple novel design ideas for your AI-driven projects.
  • Conduct a mini RITE study with a simple prototype and a coworker or potential customer, focusing on immediate feedback and iteration.
  • Begin conversations with data scientists and engineers on your team, using the concepts of digital twins and Value Matrix to understand AI capabilities and constraints.
  • Identify a potential use case for AI in your current product that solves a real customer problem without directly replacing an existing human expert.
  • Analyze your current product’s search UI and brainstorm how LLMs could improve fuzzy queries.
  • Consciously look for bias in any AI-generated content you encounter daily, and think about how you could re-prompt to introduce diversity.
  • Reflect on the ethical implications of AI features you interact with or design, considering potential harms and benefits.

Questions for Personal Application

  • How can I integrate the “4-in-a-box” collaboration model (PM, UX, dev, data scientist/AI specialist) more effectively into my current projects?
  • What specific “robot monkey work” in my design process can I begin to automate or outsource to AI today?
  • Am I practicing “Create First, Compute Later” to preserve my creative voice, or am I falling into the O-EGADS trap?
  • What are the explicit and implicit incentives driving my current project, and how can I align ethical AI design with these incentives?
  • How can I advocate for user research with real humans and push back against the use of “synthetic users” in my organization?
  • What is one “uncomfortable question” I can ask about a current AI project to uncover potential biases or misaligned value propositions?
  • How can I apply the concepts of dynamic thresholds or seasonality to improve anomaly detection in a system I work with?
  • In what ways can I become an “ambassador of innovation” within my company, introducing new AI possibilities in a practical, value-driven way?
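The dynamic-threshold question above can be grounded with a quick sketch: rather than a fixed alarm limit, compare each point against a band derived from a trailing window, so the threshold adapts as the baseline drifts. The window size, multiplier, and signal below are illustrative assumptions, not a method prescribed by the book:

```python
import statistics

def dynamic_threshold_anomalies(series, window=7, k=3.0):
    """Flag points more than k standard deviations from the trailing-window mean.

    Unlike a static threshold, the band moves with the baseline, so a slowly
    rising or seasonal signal doesn't trigger constant false alarms.
    """
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.fmean(recent)
        band = k * statistics.pstdev(recent)
        if abs(series[i] - mean) > band:
            anomalies.append(i)
    return anomalies

# A slowly rising baseline with one spike at index 10: the rising trend
# stays inside the adaptive band, while the spike is flagged.
signal = [10, 10.5, 11, 11.4, 12, 12.3, 13, 13.5, 14, 14.2, 40, 15, 15.5]
print(dynamic_threshold_anomalies(signal))  # → [10]
```

A static threshold set near the early baseline would have flagged the entire upward trend; the adaptive band flags only the genuine point anomaly.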