Peter Sweeney
1 min read · Jun 20, 2018


Explainability and interpretability: a difference that matters

I understand why explainability and interpretability have been used interchangeably here. However, it’s important to note that there’s a profound difference between the activities of interpreting a model and explaining why a system behaves the way it does.

This is particularly important as we move beyond systems that merely predict events to systems capable of supporting interventions. For example, the application areas you reference, such as medicine, justice, and financial markets, are not content with descriptions. They need to reach beyond the boundaries of the predictive model. Moreover, with local interpretability, the “explanations” don’t even reach that far!

This isn’t a knock on interpretability or the great work being done in this area. It’s merely a clarification of terms. While data science improvements “have always been driven by the search of the model with the best performance”, science more generally is the pursuit of explanations. And these technical interpretations of models are a far cry from scientific explanations.

For a full look at these differences, see this comparison between predictions and explanations. Thanks for your post.
