Special Session on Interpretability in Predictive Modeling
Chairs: Henrik Linusson, Henrik Boström, Ulf Johansson
Black-box (opaque) models do not always meet the requirements of end users in real-world applications of predictive modeling. Instead, models that allow for interpretation, or at the very least an understanding of the logic behind individual predictions, are required. This property can be crucial in some domains, e.g., for legal or safety purposes. It has also been argued that interpretable models increase user acceptance. Some learning algorithms are designed to produce models that can be directly interpreted, e.g., decision trees and rules, while others provide interpretable approximations of opaque models, e.g., algorithms for rule extraction. Other approaches provide abstract information about opaque models without actually trying to capture the underlying relationships, e.g., measures of variable importance in random forests.
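To illustrate the last kind of approach, the minimal sketch below computes impurity-based variable importances from a random forest using scikit-learn; the dataset, hyperparameter values, and printed output are illustrative assumptions and not part of the session description.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    # Fit an opaque ensemble model on a standard toy dataset (illustrative choice).
    data = load_iris()
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(data.data, data.target)

    # Impurity-based variable importance: abstract information about the opaque
    # model, obtained without exposing its underlying decision logic.
    for name, score in zip(data.feature_names, forest.feature_importances_):
        print(f"{name}: {score:.3f}")

Such importance scores rank the input variables by their contribution to impurity reduction across the forest but, unlike an extracted rule set, do not reveal how the model combines those variables to form individual predictions.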
This special session is dedicated to research on interpretability in predictive modeling. Topics of interest include algorithms for learning interpretable models and for extracting interpretable information from opaque models, empirical investigations of interpretability, methodological issues in evaluating and comparing approaches with respect to interpretability, and theoretical frameworks related to interpretability in predictive modeling.