Authors
Brian Y Lim, Anind K Dey, Daniel Avrahami
Publication date
2009/4/4
Conference
Proceedings of the 27th international conference on Human factors in computing systems
Pages
2119-2128
Publisher
ACM
Description
Context-aware intelligent systems employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. This lack of intelligibility can erode users' trust in, satisfaction with, and acceptance of these systems. However, automatically providing explanations of a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and were then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a …
Total citations
[Citation chart: citations per year, 2009–2024; per-year counts not recoverable]