14:30 - "Causal interpretability for human-centered data science" - Joshua Loftus
Joshua Loftus (London School of Economics (LSE))
Abstract:
Tools for interpretable machine learning or explainable artificial intelligence can be used to audit algorithms for fairness or other desired properties. In a "black-box" setting, one without access to the algorithm's internal structure, an auditor can only use model-agnostic methods that vary inputs and observe differences in outputs. These include popular interpretability tools such as Shapley values and Partial Dependence Plots. But such methods have important limitations that can compromise audits, with consequences for outcomes such as fairness. In high-stakes applications, it may be worth the effort to use tools that incorporate background information and can be tailored to specific use cases. We introduce promising ways to do this using the mathematics of causality, with Causal Dependence Plots serving as an example.
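To make the contrast concrete, here is a minimal sketch of the idea in Python. The model f, the data-generating process, and the structural equation x2 = 0.8 * x1 + noise are all illustrative assumptions, not the method presented in the talk: a partial dependence curve varies one input while leaving the others at their observed values, whereas a causal dependence curve first propagates the intervention through an assumed causal model.

```python
import numpy as np

# Toy black-box model (an illustrative assumption): the auditor can only
# call it on inputs and observe its outputs.
def f(X):
    return X[:, 0] + 2.0 * X[:, 1]

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)  # assumed: x1 causes x2
X = np.column_stack([x1, x2])
grid = np.linspace(-2.0, 2.0, 21)

# Partial Dependence: set x1 to each grid value while leaving x2 at its
# observed values. This ignores that intervening on x1 would also change x2.
pdp = [f(np.column_stack([np.full(n, v), X[:, 1]])).mean() for v in grid]

# Causal dependence (toy version): propagate the intervention on x1 through
# the assumed structural equation x2 = 0.8 * x1 + noise before predicting.
noise = X[:, 1] - 0.8 * X[:, 0]  # exogenous part of x2 under the assumed model
cdp = [f(np.column_stack([np.full(n, v), 0.8 * v + noise])).mean() for v in grid]

# The curves disagree: the PDP slope is ~1 (direct effect of x1 only),
# while the causal curve's slope is ~1 + 2 * 0.8 = 2.6 (total effect via x2).
print(round(np.polyfit(grid, pdp, 1)[0], 2), round(np.polyfit(grid, cdp, 1)[0], 2))
```

The divergence between the two curves illustrates the abstract's point: a method that merely varies inputs in place can misrepresent what would happen under a real intervention, which is exactly the gap that causal background information is meant to close.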