Peter Sweeney
1 min read · Jun 6, 2018


I enjoyed how you’ve unpacked interpretability and the relationship between model complexity and accuracy. But I’m a little concerned about a broader question: understanding a model versus understanding the underlying system the model captures. I discussed this recently as part of a broader conflation of explanations and predictions, including the framing of interpretable models as “explaining predictions.”

“Do you care about obtaining the best results or do you care about understanding how those results were produced?” This question expresses an instrumental view of the technology, which is ultimately limiting. Eventually, explanations need to reach into the underlying reality to inform the next questions to ask and the next models to build.
