Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation

Wazeer Deen Zulfikar, Samantha Chan, Pattie Maes

CHI 2024: The ACM CHI Conference on Human Factors in Computing Systems
Session: Health and AI C

People have to remember an ever-expanding volume of information. Wearables that use information capture and retrieval for memory augmentation can help, but they can be disruptive and cumbersome in real-world tasks, such as in social settings. To address this, we developed Memoro, a wearable audio-based memory assistant with a concise user interface. Memoro uses a large language model (LLM) to infer the user's memory needs in a conversational context, semantically search memories, and present minimal suggestions. The assistant has two interaction modes: Query Mode for voicing queries and Queryless Mode for on-demand predictive assistance without an explicit query. Our study, in which participants (N=20) engaged in a real-time conversation, demonstrated that using Memoro reduced device interaction time and increased recall confidence while preserving conversational quality. We report quantitative results and discuss the preferences and experiences of users. This work contributes towards utilizing LLMs to design wearable memory augmentation systems that are minimally disruptive.
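The abstract describes a pipeline of inferring a memory need from conversation, semantically searching stored memories, and surfacing a minimal suggestion. The sketch below illustrates one way such a pipeline could look; it is an assumption for illustration only, not the authors' implementation, and the function names, embedding model, and naive query-inference step are all hypothetical stand-ins.

    # Hypothetical sketch of a Memoro-style retrieval pipeline (not the paper's code).
    # Query Mode would pass the user's voiced query directly to retrieve();
    # Queryless Mode would first infer a query from recent conversation context.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    def index_memories(memories: list[str]) -> np.ndarray:
        # Embed each stored memory snippet ahead of time.
        return encoder.encode(memories, normalize_embeddings=True)

    def infer_query(recent_transcript: str) -> str:
        # Placeholder for the LLM step that infers the user's memory need
        # from conversational context; a real system would prompt an LLM here.
        return recent_transcript.split(".")[-1].strip()

    def retrieve(query: str, memories: list[str], vecs: np.ndarray, k: int = 1) -> list[str]:
        # Semantic search: cosine similarity between query and memory embeddings.
        q = encoder.encode([query], normalize_embeddings=True)[0]
        top = np.argsort(-(vecs @ q))[:k]
        return [memories[i] for i in top]

    memories = ["Dinner with Sam is on Friday at 7pm", "Alex's new office is in Building 32"]
    vecs = index_memories(memories)
    print(retrieve(infer_query("Wait, when are we meeting Sam again."), memories, vecs))

In this sketch, a concise suggestion would be built from the single top-ranked memory, in keeping with the paper's emphasis on minimal, non-disruptive output.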

Web: https://programs.sigchi.org/chi/2024/...

Pre-recorded video presentations for Papers at CHI 2024
