Research Area:  Machine Learning
Abstract:  Self-explanatory features of intelligent agents have been emphasized as a way of helping users build mental models. While previous studies have revealed what traits an agent's explanations should have, how to design such explanations from a dialogue perspective (e.g., through a conversational UI) has not been addressed. The purpose of this work is therefore to explore the roles of explanations in recommender chatbots and to suggest design considerations for such chatbots' explanations based on a better understanding of the user experience. For this study, we designed a recommender chatbot to act as a probe. It provides explanations for its recommendations, based on service customization in recommender systems, that reflect how it learns and evolves. Each participant used the recommender chatbot for 5 days and then took part in a semi-structured post-interview. We discovered three roles of explanations in user mental model development. First, each user rapidly built a mental model of the chatbot and became more tolerant of unsatisfactory recommendations. Second, users willingly gave information to the chatbot and acquired a sense of ownership towards it. Third, users reflected on their own habitual activities and came to rely on the chatbot. Based on these findings, we suggest three design considerations for chatbot explanations. First, the explanations should be grounded in data from diverse channels. Second, the explanations should be logical and distinguish between personal and generic data. Third, the explanations should gradually become more complete through inferences made from comprehensive information.
Keywords:  Recommender system; Explanations; User mental model
Author(s) Name:  Park, Sunjeong; Lim, Youn-kyung
Journal name:  
Conference name:  International Association of Societies of Design Research Conference
Publisher name:  IASDR
DOI:  
Volume Information:  
Paper Link:   https://iasdr2019.org/uploads/files/Proceedings/te-f-1022-Par-S.pdf