Research Area:  Machine Learning
The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems.
Keywords:  
Dialogue Systems
Machine Learning
Deep Learning
Author(s) Name:  Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, Joelle Pineau
Journal name:  
Conference name:  AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
Publisher name:  ACM
DOI:  10.1145/3278721.3278777
Volume Information:  Pages 123–129
Paper Link:  https://dl.acm.org/doi/abs/10.1145/3278721.3278777