Authors
Yiwei Yang, Eser Kandogan, Yunyao Li, Prithviraj Sen, Walter S Lasecki
Publication date
2019/3/20
Conference
IUI Workshops
Description
Machine learning (ML) models are often considered "black boxes" because their internal representations fail to align with human understanding. While recent work has attempted to expose the inner workings of ML models, it does not allow users to interact directly with the model. This is especially problematic in domains where labeled data is limited, since the generalizability of ML models then becomes questionable. We argue that this fundamental problem of generalizability can be addressed by making ML models explainable in abstractions and expressions that make sense to users, and by allowing users to interact with the model to assess, select, and build on it. By involving humans in the process this way, we argue that the co-created models will be more generalizable: they extrapolate what ML learns from little data when it is expressed in higher-level abstractions that humans can verify, update, and expand based on their domain expertise. In this paper, we introduce RulesLearner, which expresses an ML model as rules in disjunctive normal form over semantic linguistic structures. RulesLearner allows users to interact with the patterns learned by the ML model, e.g., add and remove predicates, examine precision and recall, and construct a trusted set of rules. We conducted a preliminary user study which suggests that (1) rules learned by ML are explainable, (2) the co-created model is more generalizable, and (3) providing rules to experts improves overall productivity, with fewer people involved and less expertise required. Our findings link explainability and interactivity to generalizability, and thus suggest that hybrid intelligence (human-AI) methods offer great potential.
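To make the rule representation concrete, the following is a minimal sketch, not the paper's implementation: it assumes rules are disjunctions of predicate conjunctions (disjunctive normal form) over named linguistic features, with user operations to add or remove predicates and to evaluate precision and recall against labeled examples. The predicate names, the DNFRule class, and the feature keys are illustrative assumptions.

from typing import Callable, Dict, List, Set

Example = Dict[str, str]            # e.g., semantic/linguistic features of a mention
Predicate = Callable[[Example], bool]

# Hypothetical predicate vocabulary over linguistic structures.
PREDICATES: Dict[str, Predicate] = {
    "lemma_is_acquire": lambda x: x.get("lemma") == "acquire",
    "subject_is_org":   lambda x: x.get("subj_type") == "ORG",
    "object_is_org":    lambda x: x.get("obj_type") == "ORG",
}

class DNFRule:
    """A rule in disjunctive normal form: OR over clauses, AND within a clause."""
    def __init__(self, clauses: List[Set[str]]):
        self.clauses = clauses

    def add_predicate(self, clause_idx: int, name: str) -> None:
        self.clauses[clause_idx].add(name)      # user tightens a clause

    def remove_predicate(self, clause_idx: int, name: str) -> None:
        self.clauses[clause_idx].discard(name)  # user relaxes a clause

    def predict(self, x: Example) -> bool:
        return any(all(PREDICATES[p](x) for p in clause) for clause in self.clauses)

def precision_recall(rule: DNFRule, data: List[Example], labels: List[bool]):
    # Score the current rule set so the user can decide whether to trust it.
    preds = [rule.predict(x) for x in data]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

In this sketch, a user-facing tool would surface each clause as an editable conjunction of predicates and recompute precision and recall after every edit, which mirrors the assess/select/build workflow the abstract describes at a high level.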