AI Product Design Complete Guide: Generative UI, Trust, Feedback, Streaming, Agent UX, Korean Culture Specialization (2025)
Author: Youngju Kim (@fjvbn20031)
Season 4 Ep 12 — If everything through Ep 11 was about "engineering reaching stability," Ep 12 is about "being loved by users." No matter how good the model is, the product fails if the UX is weak.
- Prologue — "Good AI product ≠ Good UX"
- Chapter 1 · Deterministic UI vs Generative UI
- Chapter 2 · Designing for Trust
- Chapter 3 · Feedback UX
- Chapter 4 · Streaming, Latency, and Progress UX
- Chapter 5 · Agent UX
- Chapter 6 · Voice UX (Extending Ep 9)
- Chapter 7 · Data and Learning UX
- Chapter 8 · Onboarding and First Impression
- Chapter 9 · Accessibility and Inclusion
- Chapter 10 · Ethics and Responsibility
- Chapter 11 · Korean Language and Culture Specialization
- Chapter 12 · Five Real-World Cases
- Chapter 13 · Ten Anti-Patterns
- 13.1 A lone prompt box
- 13.2 No progress indicator
- 13.3 Responses with no citations
- 13.4 Full confidence on hallucinated answers
- 13.5 Feedback collected, no sign of being used
- 13.6 Swallowing errors silently
- 13.7 Automatic actions with no approval gate
- 13.8 Accessibility deprioritized
- 13.9 Missing AI disclosure
- 13.10 Global tone ignoring Korean culture
- Chapter 14 · Checklist — Twelve Items Before Launching an AI Product
- Chapter 15 · Next — Season 4 Ep 13 (Finale): "Business Models in the Generative AI Era"
Prologue — "Good AI product ≠ Good UX"
Many 2023-era AI products were just "ChatGPT wrappers": a text input, a response panel, and thumbs up/down. As of 2025, most products with only that format have either failed or survived only in narrow niches.
Why:
- The ChatGPT mothership keeps getting stronger — simple wrappers have no differentiation
- Users have started recognizing AI's weaknesses (hallucination, latency, sudden refusals) — trust design is required
- You cannot demand "write good prompts" from non-technical users
So AI product design in 2025 is:
"UX design that builds trust within constraints."
This essay distills the concrete patterns.
Chapter 1 · Deterministic UI vs Generative UI
1.1 Deterministic UI
- Buttons, forms, tabs, menus — results are predictable
- Gentle learning curve, easy error correction
- Decades of HCI optimization behind it
1.2 Generative UI
- AI generates buttons, cards, charts, and forms on the fly
- Works together with Tailwind / design-system tokens
- UI morphs by context — extremely powerful but hard to predict
1.3 Hybrid patterns
- Deterministic skeleton, generative content (lists, summaries, citations)
- Render generated output as deterministic components (JSON to Card, Markdown to safe HTML)
- Actionable Chip: the model proposes, the user taps, and a deterministic action fires
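The "render generated output as deterministic components" pattern can be sketched as a validator that only hands model output to the design system when it matches a known shape, and falls back to plain text otherwise. The `Card` type and field names here are illustrative, not a real design-system API:

```typescript
// Hypothetical Card shape; in a real app this comes from your design system.
type Card = { kind: "card"; title: string; body: string; actions: string[] };

// Validate untrusted model output before handing it to a deterministic
// renderer. Anything that fails validation falls back to plain-text display.
function toCard(raw: string): Card | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // not JSON → render as plain text
  }
  const c = parsed as Partial<Card>;
  if (
    c?.kind === "card" &&
    typeof c.title === "string" &&
    typeof c.body === "string" &&
    Array.isArray(c.actions) &&
    c.actions.every((a) => typeof a === "string")
  ) {
    return c as Card;
  }
  return null;
}
```

The key design choice is that the model only ever *proposes* structure; the deterministic layer decides whether it renders, which is exactly the guardrail the quote below describes.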
1.4 Examples
- Notion AI's slash commands: deterministic + generative
- Perplexity: generated summary and citations inside a deterministic layout
- Cursor: deterministic IDE frame + inline generative code suggestions
"The freer the generation, the more deterministic guardrails you need."
Chapter 2 · Designing for Trust
2.1 Why trust is central
- AI output is probabilistic. Every time, users silently ask "is this really right?"
- One big error poisons even correct answers afterward
- Trust is not accuracy. Trust is users being able to predict the AI's limits.
2.2 Seven patterns
- Citation: show source documents, pages, and sections
- Uncertainty display: language like "85% confident" or "insufficient info"
- Editable: edit, delete, or regenerate a response in place
- Visual distinction of origin: AI-generated areas marked with icon or background
- Show work: expandable reasoning steps and search results
- Freshness labeling: "Latest data: 2025-04-10"
- Polite refusals: "I cannot help with that" + suggested alternative
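The uncertainty-display pattern above can be reduced to a small mapping from a model confidence score to the language shown next to an answer. The thresholds and labels are illustrative assumptions to be tuned per product:

```typescript
// Map a model confidence score (0..1) to the uncertainty label shown in the
// UI. Thresholds are illustrative; calibrate against your own eval data.
function confidenceLabel(score: number): string {
  if (score >= 0.85) return "High confidence";
  if (score >= 0.5) return "Moderate confidence — verify key facts";
  return "Insufficient info — treat as a starting point";
}
```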
2.3 What breaks trust
- Confident answers with no sources
- Hallucinations wrapped in "plausible" prose
- Silent failure (no error message)
- Excessive generative UI that reshapes the interface every time
Chapter 3 · Feedback UX
3.1 Collection
- Thumbs up/down: instant feedback
- Multiple-choice reasons (checkboxes): "inaccurate", "incomplete", "unsafe", "style mismatch"
- Free text: rare but deeply insightful
- The edited response itself is feedback (regenerate, edit)
3.2 Utilization
- Downvote cases to regression-eval candidates
- Upvote cases to preference training data (Ep 4 DPO)
- Edit diffs to style/format learning
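The routing described above (downvotes to regression evals, upvotes to preference data, edits to style learning) can be sketched as a discriminated union plus an exhaustive switch. The destination names are placeholders for whatever pipelines you actually run:

```typescript
// Feedback events as collected in 3.1.
type Feedback =
  | { type: "down"; reasons: string[] } // e.g. "inaccurate", "unsafe"
  | { type: "up" }
  | { type: "edit"; diff: string };

// Route each event to the pipeline named in the text.
// Destination strings are illustrative, not a real API.
function routeFeedback(f: Feedback): string {
  switch (f.type) {
    case "down":
      return "regression-eval-candidates";
    case "up":
      return "preference-training-data"; // Ep 4 DPO
    case "edit":
      return "style-format-learning";
  }
}
```

Making the switch exhaustive means adding a new feedback type later forces you to decide where it flows, instead of silently dropping it.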
3.3 Placement
- Inline buttons get 5 to 10x higher engagement than "go to dashboard" approaches
- Immediately confirm "Thanks" after feedback
- Be transparent about how personal data is used
Chapter 4 · Streaming, Latency, and Progress UX
4.1 Streaming vs batch
- Streaming: feels fast, long responses stay engaging
- Batch: complex formats (tables, JSON) or when the output must be validated as a whole
4.2 Handling latency
- Skeleton UI: layout first
- Typing indicator: "thinking..." or three dots
- Progressive content: title, then summary, then body
- Budget display: "about 5 seconds remaining"
- Cancel and wait: user can abort
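The "budget display" item can be backed by a rough remaining-time estimate computed from the streaming rate so far. This sketch assumes tokens arrive at a roughly steady rate and that you have an expected token count; both are assumptions, and before any tokens arrive it returns `null` so the UI can show a spinner instead of a made-up number:

```typescript
// Rough estimate behind the "about N seconds remaining" label.
// Assumes a steady token rate; returns null until a rate exists.
function secondsRemaining(
  tokensDone: number,
  tokensExpected: number,
  elapsedMs: number,
): number | null {
  if (tokensDone === 0) return null; // no rate yet → show a spinner
  const msPerToken = elapsedMs / tokensDone;
  const remainingMs = Math.max(0, tokensExpected - tokensDone) * msPerToken;
  return Math.round(remainingMs / 1000);
}
```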
4.3 Progress state
- Agent tasks: stages like "searching", "analyzing", "drafting summary"
- Todo checklist: Claude Code style. Check off each completed step
- Replay: review the process after completion
4.4 Preventing boredom
- For long waits, use micro-interactions (animations, playful messages)
- But excessive humor undermines professionalism
Chapter 5 · Agent UX
5.1 The three challenges
- Progress visibility: what is it doing right now
- Approval and abort: stop before risky actions
- Recovery and replay: salvage even on failure
5.2 Progress visibility
- Timeline: tool calls and results on a time axis
- Todo: "1. Open file done 2. Editing... 3. Run tests pending"
- Cost/Time meter: projected cost and time
5.3 Approval gates
- Sensitive actions (payments, deletion, external sending) trigger a human-confirm popup
- Shortcut: "auto-approve next time" option (explicit)
- Prevent abuse of bulk approvals
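An approval gate can be sketched as a wrapper that refuses to execute sensitive tools without an explicit confirmation callback (the popup in a real product). The tool names and the `confirm` signature are illustrative assumptions:

```typescript
// Tools that must pass a human-confirm gate before running.
// The list is illustrative; derive it from your own risk policy.
const SENSITIVE = new Set(["payment", "delete", "send_external"]);

// Run a tool, gating sensitive ones on explicit user approval.
// `confirm` stands in for the confirmation popup in a real UI.
function runTool(
  name: string,
  exec: () => string,
  confirm: (tool: string) => boolean,
): { ran: boolean; result?: string } {
  if (SENSITIVE.has(name) && !confirm(name)) {
    return { ran: false }; // user declined → nothing executes
  }
  return { ran: true, result: exec() };
}
```

Note the default: a sensitive tool that never reaches the confirm step simply does not run, which is the safe failure mode for the "accidents" in anti-pattern 13.7.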
5.4 Failure UX
- Clear error reason + retry button
- If the agent gives up on its own, present "reason and next alternative"
- User can jump in via "need a hand?"
5.5 Replay and sharing
- Share agent runs via link (internally)
- Simulation and debugging
- Onboarding material for new teammates
Chapter 6 · Voice UX (Extending Ep 9)
6.1 Design principles
- Short, clear responses (a summary of the 12 principles from Ep 9)
- Respect turn-taking (handle interruptions naturally)
- Speak uncertainty ("what I understood is...")
- Escalation trigger (after several failures, hand to a human)
6.2 Multimodal integration
- Voice + screen: speak guidance, show cards and links on screen
- Apple CarPlay / Android Auto integration
- Smart home / wearables: no display means pure voice UX
6.3 Accessibility
- Live captions and text alternatives by default
- Speed and size controls
- Adapt to quiet environments (auto-adjust TTS volume)
Chapter 7 · Data and Learning UX
7.1 Disclosing user data usage
- State "where my data goes" in plain language
- Whether it will be used for training, how long it is stored, how to delete it
- Short privacy summary + detailed link
7.2 Feedback to learning loop
- Set expectations about when user edits will be reflected in training
- Distinguish personal model vs shared model
7.3 On/Off switches
- "Do not use this conversation for training" toggle
- Enterprise setting: organization-wide training opt-out
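The interaction between the two toggles above fits in one resolution rule: an organization-wide opt-out must override any individual opt-in. A minimal sketch, with illustrative parameter names:

```typescript
// Resolve whether a conversation may be used for training.
// Org policy wins over the individual toggle; parameter names are illustrative.
function mayUseForTraining(userOptIn: boolean, orgOptOut: boolean): boolean {
  if (orgOptOut) return false; // enterprise opt-out overrides everything
  return userOptIn;
}
```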
Chapter 8 · Onboarding and First Impression
8.1 The first 30 seconds
- 3 to 5 examples of "what you can do with this"
- Clear data permissions and auth (OAuth, etc.)
- "Sample question" buttons: one tap should work
8.2 Progressive disclosure
- Do not expose every feature at once; teach along the way
- New-feature tooltips, history-based recommendations
8.3 Error experience
- The first failure is fatal — prepare recovery UX for the most common failures during onboarding
- User education: explain "why it answered this way" interactively
Chapter 9 · Accessibility and Inclusion
9.1 Diverse abilities
- Vision: screen readers, high contrast, font size
- Hearing: captions, text alternatives, vibration
- Motor: keyboard-only, voice-only
- Cognitive: simple language, short responses, checklists
9.2 Language and culture
- Multilingual support (UI + responses)
- Local idioms, number formats, dates, currency
- Cultural sensitivity (holidays, gender, greetings)
9.3 Economic access
- Free / low-cost tiers
- Offline mode (local LLM)
- Low-spec device support
Chapter 10 · Ethics and Responsibility
10.1 Transparency
- State clearly that AI authored the content
- Mark the boundary between human and AI (especially in conversation)
10.2 Bias and fairness
- Detect and correct outcomes unfavorable to specific groups
- Test-set diversity (gender, age, region, occupation)
10.3 Labor and economy
- Social responsibility toward jobs displaced by AI
- Labor conditions for data labelers and evaluators
10.4 Environment
- Energy consumption of large models and long contexts
- Reduce via caching, routing, and smaller models
Chapter 11 · Korean Language and Culture Specialization
11.1 Tone and honorifics
- Default to honorific form; allow soft mixing with informal depending on context
- Consistency between the formal style (hapsyo-che) and the softer polite style (haeyo-che)
11.2 Forms of address
- "Customer", "User", "Mr./Ms. OO" — each brand has its own rules
- Allow real-name and nickname settings
11.3 Cultural context
- Holidays (Lunar New Year, Chuseok), public holidays, seasonal greetings
- Sensitivity around age, school-year, and career questions
- Caution when referencing occupation or hometown
11.4 Legal phrasing
- Terms, privacy, marketing consent: clear and separated
- Refusal phrasing for financial, medical, and legal advice
11.5 Korean mobile UX
- Mixing Korean keyboard and voice input
- Short-answer vs long-answer toggle
- Balanced use of emoji and memes
Chapter 12 · Five Real-World Cases
12.1 AI coding products (Cursor-type)
- Inline suggestions + chat + Todo list
- One-key accept/reject of edits
- Replay: full diff of what the agent did
12.2 Customer support (e.g. Zendesk AI)
- Customer message, AI draft, and agent edits in a single view
- Show citations and knowledge-base links
- "Auto-send" vs "edit-then-send" toggle
12.3 Writing (e.g. Grammarly, Notion AI)
- Highlight-based edit suggestions
- Style/tone sliders
- "View original" / revert
12.4 Search and research (Perplexity-type)
- Emphasize citations and sources
- Auto-suggest follow-up questions
- Collections and notes by topic
12.5 Agents (Manus, Devin-type)
- Task board + Replay + external integrations
- Approval on risky steps
- Download/share deliverables
Chapter 13 · Ten Anti-Patterns
13.1 A lone prompt box
No onboarding, no samples, no suggestions. Blank-page terror.
13.2 No progress indicator
Users bail out of long agent runs.
13.3 Responses with no citations
Trust collapses.
13.4 Full confidence on hallucinated answers
False confidence is the worst UX.
13.5 Feedback collected, no sign of being used
Users give up quickly.
13.6 Swallowing errors silently
Unclear what failed and what to try next.
13.7 Automatic actions with no approval gate
A source of accidents.
13.8 Accessibility deprioritized
Always postponed, never shipped.
13.9 Missing AI disclosure
Regulatory violation + deceiving users.
13.10 Global tone ignoring Korean culture
Awkward in honorifics, address, and context.
Chapter 14 · Checklist — Twelve Items Before Launching an AI Product
- Generative/deterministic UI boundary stated
- Citation, uncertainty, editable UX
- Streaming + progress indicators
- Todo, approval gates, Replay for agent tasks
- Feedback (up/down) + reason options + reflection path
- Failure/error recovery UX
- 30-second onboarding with samples, permissions, expectations
- Accessibility (vision/hearing/motor/cognitive)
- Data-usage transparency + on/off switches
- AI disclosure (legal, ethical)
- Korean-language and cultural fit (honorifics, address, legal phrasing)
- Bias and diversity testing
Chapter 15 · Next — Season 4 Ep 13 (Finale): "Business Models in the Generative AI Era"
Technology, operations, and design are all covered. The final question is "how do you make money with this?"
- Pricing models (Subscription / Usage / Hybrid / Seat / Outcome)
- Cost structure and margins (API dependent vs in-house model)
- GTM (B2C / B2B / Enterprise / Prosumer)
- Building a data flywheel and moat
- Getting past the "AI wrapper" critique
- Regulatory and reputational risk as business impact
- Investment landscape (VC, corporate, public)
- M&A and integration
- Korean startups going global
- Season 4 retrospective
- Season 5 preview (e.g. "Reinventing Data Engineering")
"No matter how strong the tech, without a business model it is a one-year product." The final Season 4 essay stitches tech, ops, and design together on the axis of money.
See you in the next one.
Summary: AI product design is engineering trust within constraints. The boundary between deterministic and generative UI; the seven trust patterns of citation, uncertainty, and editing; streaming, feedback, and failure UX; the agent's progress, approval, and replay; voice turn-taking; data transparency; 30-second onboarding; accessibility; ethics; and Korean-language and cultural fit. "A good AI product is a product that converses with users within constraints." The model is only the start; UX finishes the quality of the product.