How can we communicate with an AI assistant more naturally?

TickTalk

My Role —

User Experience &
User Interface Design


@Samsung Research America

TickTalk is a wall clock AI assistant. It streamlines our digital tasks through multi-level interactions and conversations that accurately reflect the way we communicate with each other. Using computer vision technology and a multi-language recognition system, TickTalk is designed to interact through a variety of channels, not just voice and a touch screen. As the main UX designer on the project, I was tasked with researching and developing the core experience concepts and user interface of the product.

How it works

Proximity-Based UI

TickTalk delivers information differently depending on the user’s proximity to the device. When the user is far away, the device shows a simple visual UI paired with a detailed audio response. This is called the Far View. When the user is near, the device shows a detailed visual UI paired with a short, concise audio response. This is called the Near View.
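
In essence, the view switch is one branch on the user’s measured distance, trading visual detail against audio detail. Below is a minimal sketch of that logic; the distance sensor, the 1.5 m threshold, and the field names are illustrative assumptions, not values from the project.

```python
from dataclasses import dataclass

NEAR_THRESHOLD_M = 1.5  # hypothetical cutoff between Near View and Far View

@dataclass
class Response:
    visual_detail: str  # "simple" or "detailed"
    audio_detail: str   # "concise" or "detailed"

def select_view(distance_m: float) -> Response:
    """Trade visual detail against audio detail based on user distance."""
    if distance_m > NEAR_THRESHOLD_M:
        # Far View: the screen is hard to read from a distance, so keep
        # the visuals simple and let the audio carry the detail.
        return Response(visual_detail="simple", audio_detail="detailed")
    # Near View: the screen is legible, so show the detail visually
    # and keep the spoken response short.
    return Response(visual_detail="detailed", audio_detail="concise")
```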

Wave To Awake

Wave To Awake is helpful when the device is not in an ideal situation to hear your voice. For example, when the device is playing music, instead of shouting at it, the user can simply wave to initiate listening mode.
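
In outline, the computer vision pipeline emits a gesture event and the device switches into listening mode. A rough sketch, assuming a hypothetical Assistant controller and event name:

```python
class Assistant:
    """Stand-in for the device controller; method names are hypothetical."""
    def duck_audio(self) -> None:
        print("lowering music volume")
    def start_listening(self) -> None:
        print("microphone open, awaiting command")

def on_camera_event(event: str, assistant: Assistant) -> None:
    # A detected wave opens the microphone without a spoken wake word,
    # which matters when music playback would mask the user's voice.
    if event == "wave_detected":
        assistant.duck_audio()
        assistant.start_listening()

on_camera_event("wave_detected", Assistant())
```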

Passive Info Feed

Passive Info Feed is triggered simply by getting close to the device. When this feature is enabled, the user receives their most frequently requested information in a simple text form as they approach.
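
One way to read this behavior: the device keeps a frequency count of past requests and, when the proximity trigger fires, renders the top one as a text card. A sketch under that assumption (the counting scheme and names are mine, not the project’s):

```python
from collections import Counter

query_counts: Counter[str] = Counter()  # hypothetical log of past requests

def record_query(intent: str) -> None:
    """Count each answered request so frequent ones can be surfaced."""
    query_counts[intent] += 1

def on_user_approach(feature_enabled: bool) -> str | None:
    """Return a short text card for the most-requested intent, if any."""
    if not feature_enabled or not query_counts:
        return None
    top_intent, _ = query_counts.most_common(1)[0]
    return f"card: {top_intent}"

record_query("weather")
record_query("weather")
record_query("calendar")
print(on_user_approach(True))  # -> card: weather
```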

Command Suggestion

After TickTalk provides the requested information, related commands appear for the user. These suggestions are based on the information that was just requested, and the on-screen text shows the exact verbal command the user can speak to initiate them.
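
This can be modeled as a lookup from the intent just answered to a set of follow-up phrases, where each suggestion doubles as a teaching aid for what to say next. A minimal sketch; the intents and phrases are invented examples:

```python
# Hypothetical mapping from the last answered intent to related
# follow-up voice commands; the phrases shown on screen are exactly
# the phrases the user can speak.
RELATED_COMMANDS: dict[str, list[str]] = {
    "weather_today": ["How about tomorrow?", "Will it rain this week?"],
    "timer_set": ["Pause the timer", "Add five minutes"],
}

def suggest_commands(last_intent: str) -> list[str]:
    """Return follow-up suggestions for the information just delivered."""
    return RELATED_COMMANDS.get(last_intent, [])

print(suggest_commands("weather_today"))
```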

Language Switch

TickTalk’s Language Switch allows the user to temporarily communicate in another language by starting the conversation with “[the greeting word of that language] + TickTalk”. The temporary switch works by listening for the wake words of the languages in the user’s Language Set, a group of the user’s preferred languages for communication.
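
Concretely, the device listens for a per-language wake phrase of the form “greeting + TickTalk” and switches only if that language is in the user’s Language Set. A minimal sketch, with invented greetings and set contents:

```python
# Sketch of the wake-phrase check behind Language Switch. The greetings,
# the Language Set contents, and the parsing are illustrative.
GREETINGS = {"hello": "en", "hola": "es", "annyeong": "ko", "bonjour": "fr"}

language_set = {"en", "es", "ko"}  # the user's preferred languages

def detect_language(utterance: str, default_lang: str = "en") -> str:
    """Return the language to use for this one conversation."""
    words = utterance.lower().split()
    if len(words) >= 2 and words[1] == "ticktalk":
        lang = GREETINGS.get(words[0])
        if lang in language_set:
            return lang  # temporary switch, reverts after the conversation
    return default_lang

assert detect_language("Hola TickTalk what time is it") == "es"
assert detect_language("Bonjour TickTalk") == "en"  # "fr" not in the set
```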

Process

Challenge

Home assistants now have visual displays, but the experience is still benchmarked against mobile. The UX needs to be more diverse and natural, just like how we communicate in the real world.

The current smart display assistants have three problems in common.

It’s one-dimensional. Communication is a multidimensional experience that goes beyond just audio. Despite this, current AI home assistants communicate only through voice.

It’s one-way. It is always the user who gives a command, and the AI executes it. The experience is always one-directional, which makes it repetitive and causes it to lose value much more quickly.

Its scope is limited. Only one user, or a very small number of users, can go through the full experience. Most home assistants are positioned in a common area, but only a few users get to utilize their full potential.

Design Direction

After a few rounds of brainstorming, we decided to focus on four main directions for the project: Multidimensional, Conversational, Inclusive, and Delightful. Based on these ideas, we designed a number of features that fall under at least one of these directions.

UI Iterations

Here are some collected iterations of the UI design. When designing the user interface, it was important to keep it vibrant and noticeable from far away. All the AI assistant products with displays on the market are geared toward close-range usage, but TickTalk had to be versatile. We also wanted to keep it friendly, because it has to be a helping hand, not a teacher.

Selected Final UIs