
Alexa on-the-go

North • 2018

    SUMMARY

    I worked with the Amazon team to design Alexa as a heads-up, visual, on-the-go experience for smartglasses.

    MY ROLES
    Design Lead
    Research Lead
    Product Strategy

Amazon's Alexa-enabled devices have been hugely successful, selling over 100 million units globally. However, most of the Alexa experience today is an audio-in/audio-out system, usually on an Amazon Echo device on your kitchen counter or bedside table. As design lead for Alexa on Focals, my goal was to create a visual, heads-up smartglasses experience focused on on-the-go user needs.

Evaluating the audio hardware

Before starting design on the Alexa experience, I evaluated the speaker on Focals to see whether its quality was sufficient for playing back Alexa's voice in different ambient sound contexts. Once I received a prototype with representative hardware, I created a test plan to evaluate audio quality in five ambient sound scenarios a user could encounter on a daily basis. In the study, I evaluated system UI sounds (notifications and alerts) and human voice (turn-by-turn instructions and Alexa voice output).

The study showed that, while system UI sounds tested well, human voice audio from turn-by-turn instructions and Alexa sounded tinny, hollow, and very quiet; users weren't happy with it. I shared the results with the engineering team, which led to separate audio tuning profiles being implemented for different types of audio. With human voice audio sounding great on Focals, I was ready to begin ideating on a design.
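To illustrate the idea, here's a minimal sketch of per-category tuning profile selection. This is purely hypothetical: the categories mirror the study, but the TuningProfile type and its values are invented for illustration, not Focals firmware.

```kotlin
// Hypothetical sketch: choosing a tuning profile per audio category. The
// categories mirror the study; the TuningProfile type and values are invented.

enum class AudioCategory { SYSTEM_UI, HUMAN_VOICE }

data class TuningProfile(val gainDb: Float, val bassBoostDb: Float)

fun profileFor(category: AudioCategory): TuningProfile = when (category) {
    AudioCategory.SYSTEM_UI   -> TuningProfile(gainDb = 0f, bassBoostDb = 0f)
    AudioCategory.HUMAN_VOICE -> TuningProfile(gainDb = 4f, bassBoostDb = 6f) // louder, warmer voice
}
```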

Adapting guidelines

On Amazon Echo devices, Alexa is typically invoked with the wake word, “Alexa”. Due to battery constraints and users' discretion and privacy concerns, we opted for a different approach on Focals. Focals are controlled with a ring-like joystick called the Loop, which has five primary inputs: up, down, left, right, and enter. We prototyped and tested several interactions and learned that users found a long press on the Loop to be the most intuitive way to invoke Alexa.
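As a rough sketch of how that interaction might be wired up, the snippet below detects a long press on the Loop's enter input and starts an Alexa session. Everything here is hypothetical: LoopEvent, the 600 ms threshold, and the startListening() call are illustrative names, not North's actual SDK.

```kotlin
// Hypothetical sketch of long-press detection on the Loop's enter input.
// Input names, event types, and the 600 ms threshold are illustrative only.

enum class LoopInput { UP, DOWN, LEFT, RIGHT, ENTER }

sealed class LoopEvent {
    data class Pressed(val input: LoopInput, val timestampMs: Long) : LoopEvent()
    data class Released(val input: LoopInput, val timestampMs: Long) : LoopEvent()
}

class LongPressDetector(
    private val thresholdMs: Long = 600,   // assumed long-press threshold
    private val onLongPress: () -> Unit
) {
    private var pressStart: Long? = null

    fun handle(event: LoopEvent) {
        when (event) {
            is LoopEvent.Pressed ->
                if (event.input == LoopInput.ENTER) pressStart = event.timestampMs
            is LoopEvent.Released -> {
                val start = pressStart
                if (event.input == LoopInput.ENTER && start != null &&
                    event.timestampMs - start >= thresholdMs
                ) onLongPress()
                pressStart = null
            }
        }
    }
}

// Usage: invoke Alexa when a long press is detected.
// val detector = LongPressDetector { alexaSession.startListening() }
```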

The different attention states of Alexa on Focals.

Next, we followed guidelines from the team at Amazon to define how Alexa attention states (listening, active listening, thinking, and speaking) and error states should be communicated on Focals. Since their guidelines didn't account for the unique display properties of Focals, we often negotiated exceptions or alterations to optimize the experience for our product.
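Conceptually, these attention states form a small state machine that drives the display. The sketch below uses the state names from Amazon's guidelines; the DisplayController interface and transition logic are hypothetical, for illustration only.

```kotlin
// Minimal sketch: Alexa attention states as a state machine driving the
// Focals display. State names follow Amazon's guidelines; everything else
// (DisplayController, the transitions) is illustrative.

enum class AttentionState { IDLE, LISTENING, ACTIVE_LISTENING, THINKING, SPEAKING, ERROR }

interface DisplayController {
    fun showListeningCue()
    fun showActiveListeningCue()
    fun showThinkingAnimation()
    fun showSpeakingIndicator()
    fun showErrorCard()
    fun clear()
}

class AttentionStateMachine(private val display: DisplayController) {
    var state: AttentionState = AttentionState.IDLE
        private set

    fun transition(next: AttentionState) {
        state = next
        // Each state maps to a distinct visual treatment on the display.
        when (next) {
            AttentionState.LISTENING        -> display.showListeningCue()
            AttentionState.ACTIVE_LISTENING -> display.showActiveListeningCue()
            AttentionState.THINKING         -> display.showThinkingAnimation()
            AttentionState.SPEAKING         -> display.showSpeakingIndicator()
            AttentionState.ERROR            -> display.showErrorCard()
            AttentionState.IDLE             -> display.clear()
        }
    }
}
```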

Because Focals is a primarily visual interface, it made sense to take advantage of Alexa's visual response templates. At the time, Amazon's public SDK offered only three visual templates: generic text responses, lists (used for to-do lists and calendars), and weather. I worked with the team at Amazon to translate requirements designed for tablet displays to fit our tiny 110x110 px display (a sketch of this mapping follows the template comparisons below). It was my responsibility to clarify the constraints of Focals and make a strong case for exceptions to their guidelines so that the experience would make the most sense for our product and align more closely with our tenets.

On the left: the Alexa generic text response template. On the right: the text response template on Focals.

On the left: the Alexa list template. On the right: the list template used for calendar on Focals.

On the left: the Alexa weather template. On the right: the auto-scrolling multi-stage weather template on Focals.
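To make the adaptation concrete, here's a hypothetical sketch of dispatching the three public templates to renderers sized for the 110x110 px display. All of the types and render functions are illustrative, not Amazon's actual SDK or Focals code.

```kotlin
// Hypothetical sketch: dispatching Alexa's three public visual templates to
// renderers sized for the 110x110 px Focals display. All types and render
// functions below are illustrative.

const val DISPLAY_PX = 110  // Focals display is 110x110 px

sealed class AlexaTemplate {
    data class BodyText(val title: String, val text: String) : AlexaTemplate()
    data class ListItems(val title: String, val items: List<String>) : AlexaTemplate()
    data class Weather(val current: String, val forecast: List<String>) : AlexaTemplate()
}

fun render(template: AlexaTemplate) = when (template) {
    is AlexaTemplate.BodyText ->
        renderTextCard(template.title, template.text)             // wrap/truncate to fit
    is AlexaTemplate.ListItems ->
        renderScrollableList(template.title, template.items)      // one item per row, Loop scrolls
    is AlexaTemplate.Weather ->
        renderWeatherStages(template.current, template.forecast)  // auto-scrolling multi-stage view
}

fun renderTextCard(title: String, text: String) { /* draw within DISPLAY_PX bounds */ }
fun renderScrollableList(title: String, items: List<String>) { /* paginate items */ }
fun renderWeatherStages(current: String, forecast: List<String>) { /* cycle through stages */ }
```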

The final Alexa experience represents the interests of both Alexa and Focals. It was certified by Amazon in late 2018 and is currently one of the product's flagship features.

Unlocking on-the-go use cases

We saw a huge opportunity for Alexa to be complemented by the sensors and contextual capabilities at the core of the Focals experience. Combining on-the-go access to Alexa with knowledge of the user's context can be very powerful.

One of the key areas of opportunity for Focals is augmented memory: proactively helping the user remember things and get tasks done. An early experience we implemented as a proof of concept was the contextual grocery list: if the user has given Focals location access, their Alexa grocery list appears on the screen automatically as soon as they walk into a grocery store (a sketch of the trigger logic follows below).

The contextual grocery list appears as soon as you walk into a grocery store with your Focals.
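A minimal sketch of that trigger logic, assuming a geofencing callback and a way to fetch the Alexa grocery list; all of the names here are hypothetical.

```kotlin
// Hypothetical sketch of the contextual grocery list trigger. The geofence
// callback, place categories, and list-fetching function are all illustrative.

data class Place(val name: String, val category: String)

class ContextualListTrigger(
    private val fetchGroceryList: () -> List<String>,   // e.g. backed by the Alexa Lists API
    private val showOnDisplay: (List<String>) -> Unit   // renders the list template on Focals
) {
    // Called by a (hypothetical) geofencing service when the user enters a known place.
    fun onGeofenceEntered(place: Place) {
        if (place.category == "grocery_store") {
            showOnDisplay(fetchGroceryList())
        }
    }
}
```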

Although it may seem simple or trivial, the experience resonated strongly with users because it allowed them to feel as if they had a mini superpower. Focals users were able to discreetly and confidently navigate the grocery store and gather items they needed while other customers struggled with their loose sheets of paper or their phones. This use case has just scratched the surface of what's possible when combining the power of Alexa and Focals. We're currently working on much more and we're excited to see what other magical moments we can unlock.

Lessons learned

When working with a partner to integrate their product on your platform, or vice versa, communication is the most important part of the process. While guidelines provide a good foundation, those rules don't always translate directly to the product you're working on. It's critical to communicate your goals, product tenets, and constraints in order to build an amazing experience. Ultimately, when both parties work together to design the best experience for the user, the end result can be something special.
