
Unlocking Everyday Information Architecture: A Guide to Intentional Organization
Lisa Maria Martin’s Everyday Information Architecture offers a refreshingly practical and deeply ethical exploration of how we organize information on the web. This book demystifies information architecture (IA), showing that it’s not just a technical discipline for specialists, but a fundamental design decision that impacts every user. Martin argues that our choices in structuring, labeling, and presenting information are never neutral; they actively shape how people find, understand, and use digital content, ultimately affecting their lives for better or worse. Through clear explanations, relatable examples, and actionable advice, Martin empowers designers, developers, copywriters, and strategists to make more thoughtful, communicative, and inclusive organizational choices. This summary breaks down the book’s key ideas, examples, and insights chapter by chapter.
Introduction
The introduction sets a powerful and sobering tone, immediately underscoring the profound impact of information architecture. Martin begins with the chilling historical example of Walter Plecker, a registrar in Virginia in the 1920s who used his bureaucratic power to relabel racial categories to enforce white supremacy and prevent interracial marriage. His seemingly simple act of changing a label—a design decision—had devastating, dehumanizing consequences, obliterating documented identities and causing generations of harm. This stark example drives home Martin’s core argument: our work is never neutral. As designers of information, we control how people find, understand, and use knowledge, making our choices immensely powerful. We must be very, very careful.
Martin highlights the responsibility of all web workers, emphasizing Richard Saul Wurman’s axiom that “The creative organization of information creates new information.” When we structure, order, display, label, and connect information, we alter its perception. Failing to recognize this power risks building sites that are unclear, unusable, and exclusive, potentially alienating or harming users on an internet that is, for many, both mandatory and hostile. The book’s goal is to equip readers—whether they identify as information architects or not—with the principles and practices to craft more thoughtful information spaces. The following chapters will explore organizational frameworks, content analysis, categories, labels, site structure, navigation, wayfinding, tags, and taxonomies, all with the overarching purpose of helping people find, understand, and use information effectively.
Chapter 1: Systems of Organization
This chapter delves into the fundamental concept that when we organize information, we change it. The author stresses that the order, context, and presentation of information inherently alter its meaning. The goal of good organization is to enhance understanding, making complex subjects easier for humans to grasp and remember. This understanding is critical for effective design decisions, preventing outcomes where information is structured but doesn’t serve the user, much like books shelved with their spines inward.
To combat the subjective and arbitrary feeling of organizing web content, Martin introduces LATCH, Richard Saul Wurman’s powerful framework for structuring information. Wurman proposed that there are only five possible methods for organizing anything and everything:
- Location: This method organizes information based on physical or conceptual space. Examples include atlases, city maps, bus routes, Craigslist, Yelp, and even IKEA’s website, which first asks visitors to select their country (geographic location) and then organizes products by room (conceptual location like “bedroom” or “kitchen”). Martin cautions that geographical information isn’t always best organized this way; a list of state capitals, for instance, might be better alphabetized.
- Alphabet: Organizing by alphabet is rarely used at high levels of web navigation; it is better reserved for information retrieval. It provides the quickest path to a single, known item within a very long list, such as dictionaries, book indexes, phone contact lists, or Zappos’s alphabetical brand list. This method is primarily for research, not discovery, and should only be used when users already know exactly what they’re looking for.
- Time: This method organizes content chronologically, based on time-based factors. Examples include calendars, horoscopes, meeting agendas, and email inboxes (sorted by recency). Social media feeds are often time-based, though platforms like Facebook may default to algorithmic “Top Stories” displays, which can disrupt user expectations because time adds crucial context and meaning to information, especially news.
- Category: This involves organizing information around topics, themes, or predetermined groupings. Grocery stores are a prime example, grouping produce, dairy, and cereal. On the web, most website navigation is categorically organized, using labels like “About Us” or “Our Products.” This method is flexible and robust, excellent for breaking down large datasets into usable, findable pieces, making it perfect for discovery. Netflix, with its hyper-specific genre labels like “Oddballs & Outcasts,” illustrates how categories can help users find new programs tailored to their tastes, though overly niche categories can sometimes hinder broader discovery.
- Hierarchy: This method arranges items according to assigned value, from least to most or most to least important. The value can be inherent, as with yarn weights (lace-weight to super bulky), or imposed by an algorithm or analytics, such as a “Most Popular” content list on a social platform. Instagram’s feed, which displays posts based on a hidden, algorithmic hierarchy rather than chronologically or by user preference, is cited as a chaotic example. Martin clarifies that this is different from page hierarchy (sitemaps) or information hierarchy (relative importance).
While LATCH is a valuable starting point, Martin acknowledges it’s an imperfect system with subjectivity and contradictions. Katherine Bertolucci, for example, points out that LATCH itself is mnemonically organized (a method Wurman didn’t include) and that Category is a grouping method, not an ordering method. Bertolucci also suggests organization by shape as another missing method. Despite these flaws, LATCH is useful because it provides a springboard for exploring different approaches, reminding us that there is a finite number of ways to organize infinite information. The key takeaway is that organization is never arbitrary; every decision alters information perception, and creating effective structures requires understanding the content itself.
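The LATCH methods can be made concrete with a small sketch: the same dataset ordered (or grouped) four different ways. The dataset, field names, and values below are invented for illustration; only the methods themselves come from Wurman’s framework.

```python
from collections import defaultdict
from datetime import date

# A tiny invented dataset: each item has a name, a topic, a publish
# date, and a popularity score (all values are illustrative only).
articles = [
    {"name": "Yarn Weights 101", "topic": "Crafts", "published": date(2023, 5, 1), "views": 120},
    {"name": "Bus Routes Explained", "topic": "Travel", "published": date(2023, 1, 15), "views": 340},
    {"name": "Macaron Basics", "topic": "Baking", "published": date(2023, 3, 9), "views": 95},
]

# Alphabet: the quickest path to a single, known item in a long list.
by_alphabet = sorted(articles, key=lambda a: a["name"])

# Time: recency adds context, as in feeds and inboxes.
by_time = sorted(articles, key=lambda a: a["published"], reverse=True)

# Hierarchy: ordered by an assigned value (here, popularity).
by_hierarchy = sorted(articles, key=lambda a: a["views"], reverse=True)

# Category: a grouping method rather than an ordering method, as
# Bertolucci notes -- items are bucketed by topic, not ranked.
by_category = defaultdict(list)
for a in articles:
    by_category[a["topic"]].append(a["name"])
```

Note that the first three methods are all calls to `sorted` with a different key, while Category requires a different operation entirely, which underscores Bertolucci’s critique that it sits awkwardly among the ordering methods.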
Chapter 2: Content Analysis
This chapter emphasizes the crucial, yet often overlooked, initial step in any web project: understanding the content of a site. Martin argues that failing to evaluate content early leads to significant project problems, such as poor spacing, overflowing headlines, and out-of-sync templates, all stemming from design decisions based on assumptions about content. Someone on the project needs to become the content advocate, understanding what exists and its meaning, not just memorizing every page.
The solution presented is conducting content audits, which impose order on chaos by defining scope, targeting vulnerabilities, and identifying strengths. Audits provide the data needed for sound design decisions. Martin acknowledges that many might resist auditing “old content” if stakeholders propose a fresh start, but she insists it’s necessary for two key reasons:
- Content is rarely entirely scrapped: Revising existing content is often easier and more cost-effective than creating new content from scratch, and current content holds equity (users, search engines, and owners are familiar with it).
- Understanding the past informs the future: To build a new and improved system, one must understand the strengths and weaknesses of what came before. Jorge Arango is quoted on the need to observe the functioning system to develop a useful mental model.
Martin clarifies important audit definitions and distinctions:
- Audits versus Inventories: An audit is the process (the action of reviewing), while an inventory is the product (the resulting artifact, typically a spreadsheet). This distinction helps teams understand the grind versus the output.
- Qualitative versus Quantitative: This is a false dichotomy. Useful audits mix qualitative (readability, voice, brand match) and quantitative (page count, image count, word count) factors. Numbers need a story, and quality needs measurements.
- Automated versus Manual: Relying solely on automated audits (crawls) is insufficient; data only tells half the story. Human oversight is essential to provide context and make sense of the numbers. The best audits use a blend of methodologies.
The most important principle for auditing is that they should be purpose-driven. Knowing what you want to learn, who will use the results, and what resources are available dictates the type of audit. Martin often layers multiple interconnected audits throughout a project, starting with scoping.
Scoping the Site
A high-level scoping audit early in a project helps define the content and structural needs of a website. This helps arrange resources, set appropriate timelines, and communicate constraints to stakeholders. Martin outlines five key questions for auditing scope:
- How much content is on the site? This goes beyond just page count; it includes awareness of other digital properties (social media, third-party apps, external blogs) and expected future changes to content locations and formats. She notes that the difference between, say, 5,000 and 10,000 pages matters less than the kind of content involved.
- What kind of content is it? This involves identifying content types and styles to find patterns: evergreen vs. dynamic, marketing vs. storytelling, research-based (white papers), documentation/support, community areas (forums), and rich media (images, videos). For example, many images might suggest challenges with chronological browsing or image performance.
- How is the site structured? This involves observing navigation, URL patterns, breadcrumbs, duplication, missing menu items, page relationships, and logical content flow. Inconsistencies suggest a need for heavy IA work.
- How effective is the content? This is a qualitative assessment of content quality (readability, clarity, helpfulness, typos, brand voice) across a representative sample of high-profile, stakeholder-invested, and low-level pages. Quality issues often point to deeper content strategy needs.
- How is the content managed? This involves understanding workflows for writing, editing, publishing, and governance. Common pain points include lack of dedicated authors, insufficient time, CMS permission issues, lack of guidelines or training, and inconsistent editorial oversight. Martin stresses that a beautiful new site will fall into disrepair if content management isn’t addressed.
Martin warns against dismissing content challenges with common excuses like “The site’s not that big,” “The content’s not that complicated,” or “The client / the other team is handling it.” Ignoring content issues benefits no one and often leads to an incomplete understanding of the problem space. Instead, planning for content and structural needs enables better collaboration and user experience.
Working with Automated Data
After high-level scoping, a more detailed audit often involves an automated crawl paired with manual review. The automated data provides quantitative support, while human observation adds context. The automated crawl is crucial for building an exhaustive content inventory—a spreadsheet listing every page. Martin notes various tools can collect data like links per page, word counts, heading lengths, readability, and Google Analytics data.
Making automated data usable requires finessing:
- Style the sheet: Apply color, freeze rows/columns, adjust cell sizes for readability.
- Delete or hide unnecessary columns: Remove empty data or irrelevant metrics.
- Remove bad and duplicate entries: Filter out redirects, 404s, JavaScript, code, files, and duplicate URLs.
- Add section data manually: Categorize pages by major site sections, often using URL folder paths, to understand content distribution.
Parsing the data involves focusing on criteria like:
- Number of pages: Understand total site size and content distribution across sections, revealing true organizational priorities (e.g., if a “Support” section has low page numbers despite stated value).
- Number of images and videos: Reveals distribution and provides crucial alt text data for accessibility.
- Word count: Contextualize high or low counts; they’re not inherently bad, but outliers may indicate issues.
- Comparative readability: While imperfect, trends in readability scores across sections can suggest starting points for manual scrutiny.
- Number of links: Inlinks, outlinks, and external outlinks can reveal site connections or accidental diversions.
Martin emphasizes that the combination of automated data and manual observations reveals insights critical for design strategy and content approach. The chapter concludes by reinforcing that deep content analysis grounds future design and development decisions in truth, respecting content and user contexts, and enabling the building of effective communication structures.
Chapter 3: Categories and Labels
This chapter pivots to the crucial, yet often overlooked, foundational steps of conceptual organization: categorizing content and crafting effective labels. Martin draws a parallel to “tree blindness,” arguing that we often suffer from a similar “structural blindness” on the web, taking sitemaps, sections, and labels for granted rather than treating them as worthy of deeper intention. Categorizing content is the basis for your sitemap, codifying the site’s structure.
Criteria Matching
Categorizing seems simple—just group like items together. However, Martin immediately points out that defining “like” is a complex process fraught with subjectivity, bias, politics, errors, and cultural/historical memory. It’s a process of criteria matching, requiring answers to two questions:
- What are the criteria for the category? (Macro view)
- Does the content match the criteria? (Micro view)
This process is fluid, moving back and forth between defining and vetting criteria. Using the example of finding peanut butter in a grocery store, Martin illustrates how different stores might categorize it differently (condiments, breakfast aisle), and users need to perceive the store’s decision-making process to find what they want. Similarly, website users need to understand the site’s categorical logic.
Categorical Considerations
Martin identifies four key factors that should shape the criteria for organizing digital content, emphasizing that we need to be deliberate and transparent in our categorization:
- The needs of the users: Categories should be informed by who is on the site, what tasks they are trying to accomplish, and what they care about. Martin uses the Posse Foundation (a nonprofit sending students to college) as a key example. An initial, seemingly sensible approach was to categorize content by audience (Alumni, Universities). However, this approach has problems:
- Users visit to accomplish a specific task, not to select an identity.
- Users are not confident in self-categorization; they may belong to multiple or none of the visible categories.
- Content overlaps due to audience and task commonalities, leading to duplication and confusion.
Martin advocates for organizing around user tasks and actions (e.g., verbs like “nominate,” “recruit,” “support”) rather than audience personas, aligning with Gerry McGovern’s idea of organizing around what the customer wants to do.
- The goals of the business: User needs must be balanced with business goals. For the Posse Foundation, a primary goal was to better share their mission and demonstrate impact—to tell their story. The old site’s audience-based categories (Our Scholars, Our Alumni) failed to convey this narrative. Martin warns against org chart navigation, which reflects internal business structure rather than user needs, pushing against Conway’s Law.
- The current state of the content: Insights from content audits are crucial here. The current content is a yardstick for predicting future content, contains brand equity, and is “all you’ve got.” Martin recreates the Posse Foundation’s sitemap using color-coded cards, revealing that while content was equally distributed across categories, it wasn’t equally important. The most critical information for both users and stakeholders revolved around the scholarship lifecycle, which was buried. This insight showed potential to elevate that content.
- The strategic future of the content: This factor has two components:
- The strategy driving the design: For Posse, the “tell-the-story” strategy meant focusing top-level categories on the scholarship process: Shaping the Future, Recruiting Students, Supporting Scholars, Connecting Alumni. These parallel labels, using gerunds, underscore the narrative flow and emphasize user action, without excluding audiences. A fifth category, Partner with Us, was visually distinct, signaling a shift in purpose.
- The resources for content creation and maintenance: Categories must be tempered by available time, funding, and staffing. A shift in categories might require rewriting, metadata updates, or archiving content, all of which must be feasible for the client team post-launch. Proposing content that overburdens the content team is “shortsighted at best (and hostile at worst).”
Martin concludes that by critically considering user needs, business goals, and the current and future states of content, she could recommend new, more successful categories for the Posse Foundation, leading to a new sitemap and navigation systems.
Crafting Labels
The language used to label categories is as important as the boundaries that define them. Labels articulate choices and make boundaries visible. Martin states that labels and categories develop in tandem; changing one often impacts the other. Using her personal Pinterest Desserts board as an example, she describes the iterative process of creating labels like “Cookies” which then needed to become “Cookies & Bars” to accommodate “blondies” and “lemon squares” as the criteria evolved. This iterative testing and tweaking of labels and criteria ensures they work together.
For public-facing content, clarity for users is paramount. Martin prioritizes four qualities for labels:
- Clarity: Use straightforward, familiar language free of confusion and ambiguity. Mirror user language from research. Avoid corporate lingo or experimental affordances (e.g., Twitter’s “Moments”). Labels can still convey brand and values while being clear.
- Specificity: Avoid miscellaneous or catch-all categories with generalized labels (e.g., “Other”). These “junk drawers” clutter up and get ignored, often stemming from larger classification problems or labels that aren’t specific enough.
- Inclusivity: Be purposeful about inclusivity by getting input from diverse groups of users and colleagues. Ask how labels might be misconstrued or harmful, and how they can make space for more people (e.g., REI’s kids’ subcategories that deprioritize gender).
- Consistency: Use similar language, syntax, or parts of speech (all verbs, all gerunds, all nouns) for labels read in a group to speed understanding and create rhythm (e.g., mass.gov’s gerunds). However, beware that consistent language doesn’t always lead to consistent meaning. Watch for inconsistent use of first- and second-person possessive pronouns (“My Account” vs. “About Us”); Erika Hall’s rule is “ours” for company things, “yours” for user things, and no possessive for general experience.
The chapter wraps by reiterating that categorization and labeling are an alchemical process resulting in clear, user-centered, strategically defined structures that form the site’s conceptual backbone. However, when done thoughtlessly or in bad faith (like Walter Plecker), they can be destructive. IA helps reduce arbitrariness through research, inclusivity, and respect for users, building systems that work for real people.
Chapter 4: Site Structure
This chapter builds on the previous discussion of categorization, focusing on how to document and communicate the overarching structural system of a website through sitemaps. Martin begins with the example of Project 100, a website showcasing progressive women running for office. Cofounder Eduardo Ortiz intentionally designed the candidate listings without pagination to avoid lesser-known candidates disappearing on later pages. This “idealistic approach” ensured “a level playing field,” demonstrating how a well-considered design decision about structure can significantly impact discovery and understanding.
Auditing for Structure
Before rebuilding a system, one must understand the current one. This is the purpose of the structural audit, a review focused solely on a site’s menus, links, flows, and hierarchies. Its singular goal is to help build a new sitemap by tracking and recording the site’s structure as users actually experience it.
Martin details how to set up and conduct a structural audit using a spreadsheet:
- Setting up the template: She uses a color-coded outline key at the top of her audit files to track page depth and maintain orientation, especially when dealing with thousands of pages.
- Color-coding: Different colors visually denote page depth (e.g., cooler colors for deeper pages), making the spreadsheet scannable and preventing “eye glaze.”
- Special notations: She uses specific colors/styles to capture inconsistencies: on-page navigation (links not in main menus), external links (pages outside the domain), files (PDFs, Word docs that disrupt browsing), unknown hierarchy (pages that don’t seem to belong), and crosslinks (duplicates of pages canonically in other sections).
- Outlines and page IDs: Each page gets a unique Page ID (e.g., 1.0, 1.1, 1.2.1) to associate pages with their place in the hierarchy, provide a clear identifier for communication, and be usable in other project contexts like wireframing. Pages are also indented to reinforce the numerical and color-coded hierarchy.
- Criteria and columns: Beyond Page ID and URL (used selectively), columns include:
- Menu label/link: To track mismatches between link text and page names.
- Name/headline: What the page owner calls it (e.g., H1).
- Page title: Metadata title, tracked for mismatches.
- Section: Manually noted section where the page appears.
- Notes: For specific challenges or recurring patterns (e.g., “Different template, missing subnav”). Martin provides a downloadable Excel template.
Gathering Data
The process of filling out the spreadsheet is tedious but essential. Martin quotes Erin Kissane’s description of “black coffee, late nights,” and listening to Katamari Damacy music. She advises:
- Use two monitors for quick switching between spreadsheet and browser.
- Record what you see: Systematically explore navigation from left to right, top to bottom, exhausting one section at a time, adjusting observations as needed.
- Be alert to inconsistencies: Don’t overlook on-page links, external links, and crosslinks, as they reveal structural insights.
- Stick to what’s structurally relevant: Don’t record every single blog post or news story; use an “x” (e.g., 2.8.x) for dynamic, repeatable content to denote “more of the same.”
- Save frequently.
Performing structural audits develops fluency in systems thinking, which is invaluable for documenting new sites.
Building Sitemaps
The structural audit informs the new system, helping avoid past weaknesses and retain strengths. The sitemap is where this new system is documented, communicating the hierarchy of pages (parent-child and sibling relationships). While sitemaps don’t capture every possible path, they are a critical baseline.
Martin outlines the value of sitemaps:
- Shared vocabulary: They codify page titles and IDs, creating common terminology for multidisciplinary teams.
- Complete inventory: They provide a full list of new site pages or content displays, useful for planning navigation menus, wireframing, and migration.
- Mapped hierarchy: Understanding page hierarchy provides a baseline for potential user paths and mapping user journeys.
She subtly pushes back against the question, “But do we really need a sitemap?” by asserting that site structure is a design choice that requires documentation to conceptualize and communicate.
Martin then details different documentation styles for sitemaps, suggesting the best choice depends on project needs:
- Box-and-arrow diagrams: Excellent for high-level structure, particularly for stakeholders who prefer visual over detailed text. Best for smaller sites or overviews of larger ones, often spanning multiple pages (e.g., top level, then subsequent pages for details). Visual elements (colors, connectors) should convey meaningful information, and a legend may be needed.
- Outlines: A more textual approach, ideal for completionist scenarios that need to show every page across multiple nested levels. While effective for comprehensive records, they can be very long and text-heavy, potentially disappointing stakeholders expecting a visual diagram.
- Spreadsheets: The most robust option for recording a wealth of data beyond just hierarchy, such as source content, new URLs, revision status, ownership, and migration deadlines. If a structural audit was conducted, this spreadsheet can serve as a direct springboard for the new sitemap, tracking content progression from old to new. It’s ideal for collaboration on wireframing and development.
Filling in the Details
Regardless of format, a well-documented sitemap should include:
- Consistent page identification: Continue using Page IDs from the structural audit or start a new system.
- Differentiate between single pages and collections: Use succinct notations for dynamic content (e.g., overlapping boxes for visual, “[News articles]” for text).
- Don’t forget outliers: Include all necessary navigation structures, components, and pages, referring back to structural audit notes.
- Aim for editorial accuracy: Use preferred spellings, punctuation, and capitalization from content owners to avoid approval slowdowns.
- Add context: Provide rationale for decisions via reports, slide decks, or annotations, especially for changes that might surprise stakeholders.
Making Adjustments
Sitemaps are rarely perfect on the first try. Revisions are usually motivated by purity (balancing, clarity, consistency) or politics (stakeholder requests). When adjusting, Martin suggests manipulating a few levers:
- Change the labels: Tweak or rewrite labels to alter content scope.
- Change the categories: Reexamine content distribution, trying alternative groupings to open new perspectives.
- Change the content: Consider adding, deleting, or splitting content if it benefits the structure (and user), but don’t force content changes just to fit an ideal sitemap.
- Rethink your approach: As a last resort, start from scratch to eliminate accumulated “baggage” (e.g., internal politics) and find a new perspective.
Martin acknowledges that IA is closer to art than science, blending instinct, nerves, and data. No sitemap is free from politics or biases, and continuous iteration is expected. The sitemap is the “heart and soul” of site structure, but it’s just one artifact; ensuring usability also requires illuminating paths and helping users stay oriented.
Chapter 5: Navigation and Wayfinding
This chapter dives into the practical application of site structure, focusing on how users navigate and orient themselves within a website. Martin asserts that we have developed shared expectations about website functionality (e.g., navigation near the top, a footer, subnavigation in dropdowns). Systems that defy these conventions can hinder effective and efficient information use. She emphasizes that while tools promising “precisely right content” are seductive, findability and access start with clear, well-structured content and clear, well-structured ways to access it.
Plotting the Course
While sitemaps document the site’s skeleton, navigation structures are the starting points for user paths into the content. Navigation is more than a table of contents; it creates meaning, revealing what’s prioritized and how paths are labeled, communicating everything about the content and the organization behind it. Therefore, navigation structures demand deliberate design to provide clear, accessible, established pathways.
Martin identifies several common navigation structures:
- Main navigation (primary/global): Key navigation for most important content, often central to categorization and labeling.
- Secondary navigation: For content secondary to the core, often institutional or operational (e.g., “About Us,” “News”).
- Utility navigation: For functionality like logins, accounts, shopping carts, directories, search, and social links.
- Search: A critical, often overlooked, structural component, including the search bar and results page.
- Social navigation: Links to external social media or partner sites, important for documenting paths leading off-domain.
- Header or footer navigation: Areas at page top/bottom that can hold secondary/utility nav, or repeat main nav/surface deeper pages. Footers can expose more content without distracting from primary layout.
Martin uses the Seton Hill University website redesign as a case study. The old homepage suffered from “too much, too soon,” with a surplus of navigation structures (main nav, secondary nav, carousel, audience selector, search, event/news tabs). This choice paralysis backfired, especially for prospective students who were the target audience but saw prominent links for current students. The redesign narrowed the main navigation to just three menu items: Academics, Campus Life, and Admissions. This created a clear chronological journey for prospective students, leading to a much more effective, tailored site and significant increases in undergraduate admissions landing page sessions and new users.
This case study highlights the importance of balanced pathways:
- Too many paths create indecision and choice paralysis.
- Too few paths create dead ends, lacking calls to action or clear content visibility.
- Establish firm paths by making tough decisions about prioritization.
- Be prepared for users to carve their own paths, ensuring information structures empower exploration even when users stray from planned journeys.
- “Every page is the homepage” means clear wayfinding is needed at every level, as users may land on deep pages directly from search.
Wayfinding Signals
In any large or unfamiliar space (physical or digital), we look for signposts to orient ourselves. Online, these are information scent signals that help users build a composite sense of place: “Where are we? Where have we come from? Where are we going?” These signals are often unconscious but critical for navigation.
Martin highlights three visible wayfinding signals:
- Breadcrumbs: These are immediate, textual, and visual cues that help users orient themselves. They spell out an established path, providing context for a page’s location in the larger system (e.g., Serious Eats showing a recipe’s organization). They are useful even in intuitive designs because users are often distracted and miss signals. Google also “likes” breadcrumbs for SEO.
- URL structures: More than just addresses, URLs offer helpfully repetitive signals about a page’s relationship to the system (e.g., wiltomakesfood.com/recipes/french-macarons/ clearly indicates the page is about French Macarons and is under the “Recipes” section). Inconsistencies in URL patterns can be misleading. Beyond location, URLs also communicate security, reliability, and trustworthiness, a crucial signal in the “post-factual era” where misleading URLs contribute to misinformation (e.g., Newsweek’s SEO-driven headlines).
- Calls to action (CTAs): Every page should have a clear next step for the user. While a page might have multiple CTAs, there should always be one primary purpose and one primary action. CTAs act as a type of navigation, furthering journeys and fulfilling the page’s purpose. Pages without clear actions are “dead ends.”
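The relationship between URL structures and breadcrumbs can be made concrete with a small sketch. Assuming a site follows the consistent URL pattern Martin describes (section names as path segments), a breadcrumb trail can be derived mechanically from the path. This is a minimal illustration, not an implementation from the book; the function name and the example URL’s page are hypothetical:

```python
from urllib.parse import urlparse

def breadcrumbs_from_url(url: str) -> list[str]:
    """Derive a breadcrumb trail from a URL path (hypothetical helper).

    Only works when URL segments mirror the site's hierarchy --
    exactly the 'helpfully repetitive' consistency Martin recommends.
    """
    path = urlparse(url).path.strip("/")
    segments = path.split("/") if path else []
    crumbs = ["Home"]
    for seg in segments:
        # Turn "french-macarons" into "French Macarons" for display
        crumbs.append(seg.replace("-", " ").title())
    return crumbs

print(breadcrumbs_from_url("https://wiltomakesfood.com/recipes/french-macarons/"))
# → ['Home', 'Recipes', 'French Macarons']
```

The inverse also holds: when URL patterns are inconsistent, no such derivation is possible, and users lose one of their orientation signals.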
Martin particularly admonishes against “quick links”—lists of links without context, often driven by political expediency rather than user need. She argues they are a “usability trap” that imply other links are “slow,” lead to user confusion, and are a symptom of poor underlying organization. Instead of quick links, address the core findability issues through better structure, clear paths, clear labels, and clear wayfinding.
Wild Goose Chases
Martin illustrates poor navigation and findability with the Harney & Sons tea website. As a casual tea drinker, she struggles with their categorical navigation (e.g., “Black Tea” leading to inconsistent subcategories like “flavored,” “blend,” “regional”). This system expects users to possess specialized knowledge (jargon) that the content owners have, rather than supporting discovery or understanding for less knowledgeable users. The breadcrumbs offer little clarity.
This leads to a discussion of information-seeking behaviors, borrowing from Donna Spencer:
- Known-item seeking: Users know exactly what they’re looking for but not where to find it (e.g., Googling a question, searching Netflix for a specific show, typing “Paris” into Harney & Sons search).
- Exploratory seeking: Users don’t have a specific intent but are browsing or conducting general research (e.g., looking for a new show, nearby restaurants).
Spencer also defines two further behaviors:
- Re-finding: Known-item seeking for something previously found.
- “Don’t know what you need to know”: Exploratory seeking with even less initial information.
The challenge with Harney & Sons is that it assumes the user has the same tea classification knowledge as the content owners; it’s designed with full product knowledge as the starting point, not the endpoint.
Findability and SEO
The discussion naturally shifts to Search Engine Optimization (SEO). Martin expresses disdain for past “keyword stuffing” practices that prioritized search engines over human readability. Chris Corak of Onward explains that modern SEO aligns more with user needs: “Often, what the search engine wants is what people want to see, too.”
However, Martin acknowledges that shady practices (clickbait, hashtag abuse, keyword messes) persist, citing Newsweek’s insensitive SEO-driven headlines after Anthony Bourdain’s death as an example of a publication choosing traffic over “editorial morality.” She argues this binary—ethical and lose, or cheat and win—is false.
Rebekah Baggs of Onward suggests designers focus on the relationship between content’s meaning and its presentation. Visual hierarchy must support information hierarchy, as labels and markup (H1s, H2s) speak to both search robots and human understanding. Content strategist Rick Allen adds that “Hierarchy and prioritization is what makes information cohesive.” The bottom line is to skip keyword repetition and focus on the interplay of content and design for findability.
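Baggs’s point that headings (H1s, H2s) speak to both robots and humans can be checked mechanically: an information hierarchy is suspect when a page has multiple H1s or skips levels (an H2 followed directly by an H4). The sketch below is my own illustration of that heuristic, not a tool from the book, and the regex-based extraction is deliberately simplistic:

```python
import re

def heading_levels(html: str) -> list[int]:
    """Extract heading levels (1-6) from HTML in document order."""
    return [int(m) for m in re.findall(r"<h([1-6])[^>]*>", html, re.IGNORECASE)]

def hierarchy_is_sound(levels: list[int]) -> bool:
    """True if there is exactly one h1, it comes first, and no level is skipped."""
    if levels.count(1) != 1 or not levels or levels[0] != 1:
        return False
    for prev, curr in zip(levels, levels[1:]):
        if curr > prev + 1:  # e.g. jumping from h2 straight to h4
            return False
    return True

good = "<h1>Recipes</h1><h2>Cookies</h2><h3>Macarons</h3><h2>Cakes</h2>"
bad = "<h1>Recipes</h1><h4>Macarons</h4>"
print(hierarchy_is_sound(heading_levels(good)))  # → True
print(hierarchy_is_sound(heading_levels(bad)))   # → False
```

A page that fails this check is usually signaling a mismatch between its visual hierarchy and its information hierarchy, which confuses search robots and human readers alike.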
The chapter concludes by reinforcing that navigation structures, wayfinding signals, content meaning, and design all work together to help users find what they need. It’s about being critical and responsible for all the signals we send in the information space, including the taxonomy.
Chapter 6: Tags and Taxonomies
This chapter demystifies taxonomy, defining it simply as a “list of terms used to arrange web content.” Martin acknowledges that while the term can sound abstract and intimidating, it’s a concrete artifact that can be straightforward. She then provides key questions to determine if a website needs a taxonomy:
- Will multiple authors and editors manage content?
- Will portions of the publishing process be automated (e.g., content displaying in specific areas based on topic)?
- Will search functionality include filtering or faceting (user-controlled sorting)?
If the answer to any of these is yes, some taxonomic work is likely needed.
Using Taxonomies
Martin breaks down the different functions of site taxonomy:
- Controlled vocabularies: At its simplest, a taxonomy is a controlled vocabulary—a list of words with canonical spellings and punctuation to enforce editorial and experiential consistency. This ensures everyone (team, stakeholders, users) uses the same language and definitions (e.g., “Arts and Sciences” vs. “Arts & Sciences”). This overlaps with editorial style guides and contributes to a professional, reliable experience. If no style guide exists, one can be started, with processes for enforcement and future changes.
- Tagging and sorting: Taxonomies are used to identify terms (tags) that sort content on the site, enabling content to dynamically appear in designated areas. For Carnegie Mellon University’s College of Engineering, research areas were mapped to tags (e.g., “robotics”), so stories tagged accordingly would automatically populate the relevant research pages. This ensures pages are accurately populated automatically.
- The beauty and majesty of tags: Tags are both content and connection. They can be hidden backend functions or visible design elements offering topical signals and links (like social media hashtags). Martin references The Toast’s redesign, where an unwieldy folksonomy of 8,182 unique tags was streamlined. Joke tags were creatively kept as plain text fields linking to Google Search, demonstrating how funny and functional elements can coexist with proper taxonomy in the backend. The redesigned site relied heavily on tags and recirculation modules for navigation, offering endless content discovery.
- The lies and false promises of tags: Martin warns against folksonomies—free-range taxonomies created by content authors adding terms on the fly. While seemingly less work up front and offering creative flexibility, folksonomies often result in bloated, sprawling, inconsistent messes (e.g., #dogsofinstgram vs. #dogsofinstagram). This undermines findability, content measurement, and user experience. She advocates for establishing consistent tags through an enforced taxonomic system with a process for managing the tag list over time, keeping it flexible yet controlled.
- Content filtering: Taxonomies go further when users can interact with content display via faceted search or filters. This requires crystal-clear taxonomies that can be accurately applied by authors and understood by users.
- Faceted search: Martin uses Ravelry, a site for fiber artists, as an example. Its robust taxonomies for patterns and yarns (classified by craft, product type, fiber type, etc.) enable highly granular faceted search. This precision is possible because: 1) the taxonomies were designed and applied (not emerged spontaneously), and 2) Ravelry users actively apply these taxonomies when using the system, making user-generated content highly searchable and useful to the community.
- Trust and community: Ravelry’s success is tied to its community’s trust in the founders to treat their data with respect, and the founders’ trust in users to cocreate the experience. The community’s dedication led them to even update 170,000 existing patterns after a metadata revision. Martin contrasts this with systems that exploit users, emphasizing that “No amount of taxonomy can save them.”
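Two of the functions above — controlled vocabularies and tag-based filtering — can be sketched in a few lines. The code below is an illustrative sketch under my own assumptions (the tag list, story data, and function names are all hypothetical); it shows how enforcing canonical terms, rather than letting authors tag on the fly, keeps content sortable:

```python
# A controlled vocabulary: every accepted variant maps to one canonical tag.
CANONICAL_TAGS = {
    "robotics": "robotics",
    "robots": "robotics",
    "arts & sciences": "arts and sciences",
    "arts and sciences": "arts and sciences",
}

def normalize_tags(raw_tags: list[str]) -> tuple[set[str], set[str]]:
    """Map author-entered tags to canonical terms; flag unknown ones.

    Rejected tags go to taxonomy review instead of silently entering the
    system -- the opposite of a free-range folksonomy.
    """
    clean, rejected = set(), set()
    for tag in raw_tags:
        key = tag.strip().lower()
        if key in CANONICAL_TAGS:
            clean.add(CANONICAL_TAGS[key])
        else:
            rejected.add(tag)
    return clean, rejected

def filter_by_tag(items: list[dict], tag: str) -> list[dict]:
    """Simple faceted filter: items whose tag set includes the canonical tag."""
    return [item for item in items if tag in item["tags"]]

stories = [
    {"title": "New robot arm", "tags": {"robotics"}},
    {"title": "Dean's welcome", "tags": {"arts and sciences"}},
]
print(filter_by_tag(stories, "robotics"))
# → [{'title': 'New robot arm', 'tags': {'robotics'}}]
```

The design choice worth noting is that consistency is enforced at entry time: misspellings like `#dogsofinstgram` never reach the published tag list, so filtering and content measurement stay reliable.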
Documenting Taxonomies
To begin taxonomy work, one simply needs to write words down. Given that most sites have multiple taxonomies (controlled vocabularies, tagging systems, faceted term lists), the best tool is a spreadsheet. Martin describes her approach:
- The first tab serves as an overview, explaining what taxonomies are, their impact on the project, and identifying different taxonomic categories with definitions and instructions.
- Subsequent tabs detail each taxonomic category, ranging from simple lists (e.g., academic departments) to complex lists with associated terms and identifiers (e.g., degree program names with program types and degree options).
The number of tabs depends on the system’s robustness. Collaboration is key, collecting feedback from designers, developers, and copywriters. Taxonomies are living documents, meant for collaboration and evolution even after launch, requiring ongoing management by content owners.
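The spreadsheet structure Martin describes — one tab per taxonomic category, ranging from simple lists to lists with associated terms — can be modeled as plain data. This is a minimal sketch under my own assumptions (the category names and example terms are hypothetical, and each “tab” is rendered as CSV text for easy sharing and review):

```python
import csv
import io

# Hypothetical taxonomy document: one "tab" per taxonomic category.
# A simple list has one column; a complex list carries associated terms.
TAXONOMIES = {
    "departments": [
        ["Term"],
        ["Biology"],
        ["Chemistry"],
        ["English"],
    ],
    "degree_programs": [
        ["Program", "Program type", "Degree options"],
        ["Biology", "Major", "BA; BS"],
        ["Creative Writing", "Minor", ""],
    ],
}

def export_tab(name: str) -> str:
    """Render one taxonomy 'tab' as CSV text for collaborators to review."""
    buf = io.StringIO()
    csv.writer(buf).writerows(TAXONOMIES[name])
    return buf.getvalue()

print(export_tab("degree_programs"))
```

Keeping the taxonomy in a plain, diffable format like this supports the point that it is a living document: designers, developers, and copywriters can all propose changes, and content owners can manage the list over time.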
Priorities and Values
Martin concludes by asserting that taxonomies, like all categorization and labeling, convey a specific perspective and have a political impact. She uses the Gettysburg College website redesign and its specialized academic program search tool as an example. The tool matched student topics (e.g., “activism,” “outer space”) to relevant programs using a comprehensive taxonomy. This raised critical questions:
- Why would “literature” automatically mean English literature? What about African studies or Chinese language programs?
- How would “languages” prioritize English over Arabic, Greek over Spanish?
- Would “ethics” only return philosophy, political science, or religion, implying it’s irrelevant to computer science?
These questions revealed how easily unexamined assumptions could lead to blindly perpetuating racism, sexism, classism, and other systems of inequality. While the final tool’s complexity was scaled back, the taxonomy work highlighted the ease of making harmful design decisions. Martin emphasizes that designers will still make mistakes, but intentions don’t matter as much as impact. Understanding this responsibility is an ongoing process.
Conclusion
The conclusion powerfully reiterates the book’s central message: “Whatever you’re doing, it is not neutral. It is either challenging what’s going on or normalizing it.” Walter Plecker’s act of changing a label in 1924 — which led to the “statistical genocide” of indigenous people in Virginia and required a federal law nearly a century later to begin repairing the harm — serves as a stark reminder of the profound, long-lasting impact of information organization.
Martin emphasizes that information is not neutral, nor are our choices about how to present, structure, write, juxtapose, or classify it. Every design decision has an impact, and we must stand up and own that impact. She laments that sitemaps and section labels are often treated as mere technicalities, but underscores that information architecture helps us bring more care to these decisions. We have a responsibility to be deliberate about choices that help real people find, understand, and use information in the world.
The book concludes by highlighting the tangible benefits of good IA: it can help someone get a job, pay bills, connect with loved ones, or simply feel seen rather than silenced. Ultimately, “When we organize information, we change it. Let’s change it for the better.”
Key Takeaways
- Information architecture is never neutral: Every decision we make about organizing, structuring, and labeling information has a profound impact, for better or worse, on how users find, understand, and use content.
- Content analysis is foundational: Before any design or structural work, deeply understanding the existing content through purpose-driven audits is essential. This includes knowing content types, quantity, current structure, effectiveness, and management workflows.
- Categorization and labeling are critical design decisions: These are not arbitrary acts but should be informed by user needs, business goals, and the current and future state of content. Labels must prioritize clarity, specificity, inclusivity, and consistency, avoiding “junk drawer” categories and hidden biases.
- Sitemaps document intent: Sitemaps are vital artifacts for communicating the intended hierarchy of pages, fostering shared vocabulary, providing a complete content inventory, and mapping user journeys. They are living documents that evolve with feedback and adjustments.
- Navigation and wayfinding empower users: Beyond primary menus, all site elements (breadcrumbs, URLs, calls to action) serve as crucial wayfinding signals. Designers must balance providing clear, firm paths with allowing users to carve their own, always assuming “every page is the homepage.”
- Taxonomies bring order and power: From simple controlled vocabularies to complex faceted search systems, taxonomies are lists of terms that enable content sorting, filtering, and dynamic display. Deliberately designed taxonomies (as opposed to chaotic folksonomies) are crucial for findability and for building trust within online communities.
- Impact over intention: Regardless of our good intentions, the impact of our design decisions is what truly matters. We must actively interrogate our biases and consider potential negative outcomes to build systems that work for diverse groups of people.
Next Actions:
- Conduct a mini-audit of a site you work on: Pick a small section and try to apply Martin’s structural audit template. Note inconsistencies and content types.
- Analyze a key navigation menu: Evaluate its labels for clarity, specificity, inclusivity, and consistency, considering the user tasks it supports.
- Reflect on a “quick links” section: If your site has one, identify its purpose and discuss with stakeholders if it can be better integrated into the main structure.
- Identify a potential taxonomy: Think about a type of content on your site that could benefit from a controlled vocabulary or tagging system.
Reflection Prompts:
- How might your current site’s organizational choices unknowingly perpetuate biases or exclude certain users?
- In what ways could your team benefit from a deeper, shared understanding of your content’s purpose and underlying structure?
- What “invisible” design decisions (like URL structures or heading tags) on your site are sending unintended signals to users or search engines?