GLASSES ASSISTANT

The Google smart glasses program aimed to seamlessly integrate digital intelligence into the physical world. The product evolution spanned from early monocular display prototypes, focused on core Assistant tasks and translation, to advanced spatial AI concepts capable of deep real-world contextual understanding. This multi-year initiative required establishing entirely new paradigms for wearable displays, balancing the physical constraints of the hardware with the vast capabilities of Google Assistant to deliver immediate, hands-free value.

Year

05.25

Scope

Assistant for XR UX Lead

Timeline

7 years


Vision Meets Voice: Architecting Assistant for Smart Glasses

As the UX Lead driving product vision and Assistant integration, I defined how users interacted with AI through a deeply constrained wearable interface. I architected the information design to optimize the balance between audio feedback and limited visual real estate, ensuring critical answers were instantly digestible. By pioneering multimodal interaction models that combined voice queries with computer vision, I laid the groundwork for contextual AI, ultimately securing several patents and defining the future of interactive spatial experiences.

Integrating core Assistant capabilities into early prototype frames established the foundational UX paradigms that would eventually inform advanced spatial AI concepts.

Heavily constrained monocular displays require a strategic balance of visual density and audio cues to provide immediate user value without creating cognitive overload in the real world.

This visual information design system for Assistant responses prioritizes critical data above the physical fold of the micro-screen, maximizing instant legibility.

Introducing spatial AI transforms the wearable from a basic display into a context-aware guide, enabling unique, interactive real-world applications.

Identifying multimodal queries as the central value proposition in 2018 made the case for using computer vision to solve complex, everyday problems like ambiguous parking rules.

Establishing foundational multimodal logic required close partnership with conversation designers to combine visual context with voice inputs, seamlessly resolving ambiguous user requests.

This is one of several patents secured for novel interaction paradigms, cementing foundational work in the emerging field of spatial computing.