Highlights

ESTÉE LAUDER COMPANIES

Accessibility-First AI Experience

Designed a voice-first mobile experience for users across the vision-impairment spectrum, from low vision to fully blind.

Trust under Uncertainty

Built conversational patterns and recovery flows that kept users confident even when computer vision was imperfect or ambiguous.

Solving Constraints with UX

Turned 2–3 second processing delays into natural, human pacing, so the experience felt responsive without pretending to be instant.


Background

The Goal
Explore how emerging AI technologies could make beauty more accessible to 290 million people globally living with vision impairment.

The Audience
Users across the full vision-impairment spectrum, from low vision to fully blind, each relying on trusted feedback from others or on non-visual cues.

The Product
VIME is a 0→1 pilot exploring how computer vision and voice interactions could support independent lipstick application for the target audience. Designing this experience meant prioritizing trust, authenticity, recovery, and timing without relying on a visual UI.


Role

Senior Product Designer

0→1 Pilot

Accessibility-First

Voice-First UX

AI Interaction Design

End-to-end Product Design (Accessibility & AI)

Leading product design and content strategy

Overview

I led end-to-end product and experience design for VIME, covering research, interaction design, conversational UX, voice content strategy, and testing.

Unexpected Reorganization

When our senior content strategist unexpectedly rolled off the project, I took ownership of content and voice strategy to maintain continuity across the experience.

Day-to-day

I partnered closely with cross-functional teams and accessibility experts to align user needs with technical constraints and deliver a reliable voice-first mobile experience.

Key Responsibilities

  • Led accessibility-focused research with visually impaired participants

  • Designed voice and conversational interaction patterns

  • Helped define requirements around computer vision accuracy and responsiveness

  • Created content and voice principles to guide AI responses

  • Defined success metrics that balanced user trust with technical constraints

Design

Designing an AI-driven, voice-first, and accessible experience required rethinking user interactions, feedback, and error handling to build trust without a visual UI.

Human-in-the-Loop

AI Constraints

Building Trust

Latency & Feedback

Error Recovery


Designing with Non-Visual Constraints

Creating an accessible and delightful experience without a traditional UI

VoiceOver Conflict


Challenge: VoiceOver is essential for iOS navigation, but VIME's voice assistant needed to guide users through makeup application. Running both simultaneously created audio conflicts, with two voices talking over each other.

Design Decision: Rather than fighting VoiceOver or asking users to disable a trusted tool, I reduced reliance on traditional UI interactions, ensuring a cohesive experience that works without VoiceOver interfering. VIME relied almost solely on voice-in / voice-out interaction, without gestures or buttons.
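
As a rough illustration of that decision, here is a minimal sketch, not the shipped code; the view controller and its contents are hypothetical. With no focusable controls on screen, VoiceOver has nothing to read over VIME's own voice.

```swift
import UIKit

// Minimal sketch (hypothetical class and view names): the session screen
// exposes no buttons or gestures, so VoiceOver has nothing to focus on
// or announce over VIME's voice.
final class VIMESessionViewController: UIViewController {
    private let cameraPreview = UIView() // host for the camera preview layer

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(cameraPreview)

        // Keep the preview out of the accessibility tree entirely;
        // all interaction happens voice-in / voice-out, not by touch.
        cameraPreview.isAccessibilityElement = false
        view.accessibilityElementsHidden = true
    }
}
```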

Latency Masking

Challenge: Through interviews, I learned that almost all users increased VoiceOver's speaking rate to 3–5× the default. VIME needed at least 3 seconds to analyze and provide feedback, creating a feedback void where users felt the system was lagging or broken.

Design Decision: I focused on the human experience and natural conversation to mask the delay and create a buffer.


User

Okay, I'm ready!

VIME

Sure, it would be my pleasure to take a look…

By the time that sentence finished, the AI was halfway through processing. This response matched the luxury boutique experience, bought time for the system to process, and made the interaction feel natural and human.
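
Sketched in code, the pattern looks something like this; analyzeLips() and speakFeedback(for:) are hypothetical placeholders for the vision call and the follow-up response. The acknowledgment line runs concurrently with the analysis, so the delay disappears into the natural pacing of the reply.

```swift
import AVFoundation

// Minimal sketch of the latency-masking pattern; the vision and feedback
// helpers are hypothetical placeholders.
let synthesizer = AVSpeechSynthesizer()

func respondToReadyPrompt() async {
    // Kick off the computer-vision analysis in the background...
    let analysis = Task { await analyzeLips() }

    // ...and speak the conversational buffer at the same time.
    synthesizer.speak(AVSpeechUtterance(string: "Sure, it would be my pleasure to take a look…"))

    // By the time the sentence finishes, processing is usually complete.
    let result = await analysis.value
    speakFeedback(for: result)
}
```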

Natural Lip Color vs Lipstick Color

Challenge: Certain colored lipsticks closely resembled bare lips, making it difficult for computer vision to detect lipstick boundaries.

Design Decision: A conversational fallback that invited the user back into the flow as a built-in re-do opportunity.


User

Okay, I'm ready!

VIME

I'm sorry, I don't detect lipstick. Have you applied it yet? If so, I can take another look.

This built trust through transparency and gave the AI another chance to re-analyze. The exchange was framed as a collaborative effort that brought users into the error-handling loop, rather than presenting a dead-end error message.
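
As a minimal sketch of that loop (the detection call, confidence threshold, and voice helpers are all hypothetical), the low-confidence path becomes a question and an offer to retry rather than a terminal error.

```swift
// Hypothetical sketch of the conversational fallback: low detection
// confidence is voiced as a question and an invitation to retry.
func checkApplication() async {
    let result = await detectLipstick() // hypothetical vision call

    guard result.confidence > 0.6 else { // assumed placeholder threshold
        speak("I'm sorry, I don't detect lipstick. Have you applied it yet? If so, I can take another look.")
        if await userConfirms() { // hypothetical voice-input helper
            await checkApplication() // the built-in re-do opportunity
        }
        return
    }

    speak(feedback(for: result)) // e.g. coverage and evenness feedback
}
```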

Positioning, Lighting, and Orientation

Challenge: It's difficult for users to know whether their device is properly oriented toward their face. One participant said another app just kept yelling “I can’t see you!” over and over. She was so frustrated that she threw the phone into a drawer.

Design Decision: I explored directional audio, vibration cues, and environmental feedback, but interviews made it clear that users preferred simple, human instructions.

VIME

I can’t quite see you… try moving your phone to the left.

“The fact that she worked with where I was and didn’t just say ‘I can’t see you’, that’s huge.”
— VIME Testing Participant

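A rough sketch of how such an instruction could be derived from Vision's face-rectangle output; the thresholds are assumptions, and front-camera mirroring may flip the left/right mapping.

```swift
import Vision

// Hypothetical sketch: map the detected face position to a human
// instruction. VNFaceObservation reports a normalized bounding box
// (0–1, origin at bottom-left).
func guidance(for face: VNFaceObservation?) -> String? {
    guard let face else {
        return "I can't quite see you… let's try adjusting your phone."
    }

    let midX = face.boundingBox.midX
    if midX < 0.35 {
        // Mirroring of the front camera may flip this; calibrate in testing.
        return "I can't quite see you… try moving your phone to the left."
    } else if midX > 0.65 {
        return "I can't quite see you… try moving your phone to the right."
    }
    return nil // roughly centered; no correction needed
}
```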

Research

Focused on understanding vision impairment, non-visual interaction patterns, and how trust is built through transparency while stress-testing AI and computer vision in real time.

Live Pilot (TestFlight)

9 Participants

Accessibility Testing

In-Context Interviews

Understanding Users, Accessibility, and Technology

Learning how users perceive, trust, and recover from AI-driven voice interactions

Technical Constraints and Feasibility


Defined MVP capabilities for computer vision and real-time feedback.

Partnered closely with engineering to understand constraints around:

  • Detection accuracy

  • Latency

  • Spatial awareness

  • Voice response timing

Understanding Vision Impairment


Consulted with accessibility experts and aligned to WCAG and RNIB standards.
Built empathy through hands-on immersion by navigating an iPhone blindfolded using VoiceOver, developing a deeper understanding of cognitive load, pacing, and emotional reassurance in non-visual experiences.

User Interviews and Testing


I ran a live TestFlight pilot with nine participants with varying levels of vision impairment.

  • Usability tests were built around intentional failure states, not just a happy path

  • Real-time troubleshooting of AI mistakes, response latency, camera positioning, and feedback accuracy.

  • Authenticity and transparency built more trust than a seemingly perfect system

User Feedback


  • Users valued voice-first interactions without the need to press buttons or use gestures

  • Calm, natural, and friendly voice responses increased user confidence, delight, and trust

“You know how people say, ‘I don’t know how I ever lived without it’? I think this is going to be one of those apps.”
— VIME Testing Participant

Impact

Empowered Independence

VIME validated that voice-first interactions empowered users with vision impairment to confidently and accurately apply lipstick independently, often for the first time.

Trust as a Requirement

The pilot revealed that transparency, conversational pacing, and collaboration mattered more to users than flawless detection, reshaping how success was defined for AI-driven experiences.

Foundation for Future Exploration

The pilot influenced ELC's direction on accessible AI, with takeaways reinforcing that trust, transparency, and natural, human-centered behaviors are essential for personal, high-stakes interactions with technology.

