Authors
Aaron Springer, Steve Whittaker
Publication date
2018/11/6
Journal
arXiv preprint arXiv:1811.02163
Description
The rise of machine learning has brought closer scrutiny to intelligent systems, leading to calls for greater transparency and explainable algorithms. We explore the effects of transparency on user perceptions of a working intelligent system for emotion detection. In exploratory Study 1, we observed paradoxical effects of transparency: it improved perceptions of system accuracy for some participants while reducing them for others. In Study 2, we test this observation using mixed methods, showing that the apparent transparency paradox can be explained by a mismatch between participant expectations and system predictions. We qualitatively examine this process, finding that transparency can undermine user confidence by causing users to fixate on flaws when they already hold a model of system operation. In contrast, transparency helps if users lack such a model. Finally, we revisit the notion of transparency and, based on our insights, suggest design considerations for building safe and successful machine learning systems.
Total citations
2019: 1, 2020: 1, 2021: 2, 2022: 2, 2023: 4 (total: 10)