Adding elements of human embodiment to a conversational interface nudges users to interact even more socially with devices they already regard as human-like. By integrating facial features such as eyes or ears, the user is subconsciously reminded that another person is in the room.
How could facial features be integrated with the existing smart speaker product designs?
Apple, Amazon, Microsoft, and Google are unlikely to ever do this: they want their smart speakers to blend seamlessly into the background of everyday life. Drawing attention to the device itself is anathema to the design philosophy of ubiquitous computing. Nevertheless, the prototypes on the following pages show how eyes could be incorporated into existing smart speaker designs.