Peter Sweeney
May 1, 2019


Trust isn’t the main reason people need interpretable models. If the goal were merely prediction, I would agree: choose the best prediction engine. But people ultimately don’t want predictions; they want to act on them. They want control. That exposes a limit on the utility of predictive models: they assume a static environment, while real life is dynamic.

You write, “Black-boxes and reliable testing frameworks are all we need.” But you’re omitting the most important part of the system: you! People, as universal explainers, fill in the gaps in these otherwise black-box systems. They reflect on the outcomes and decide how to act. Inevitably, those actions impact the systems themselves, requiring updates to the predictive models. In this sense, interpretations and explanations are intrinsic to how a well-functioning predictive system operates, black box or otherwise.
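To make that loop concrete, here’s a minimal, hypothetical sketch (the toy running-mean model, the decision rule, and the drift numbers are my own illustration, not anything from your article): a prediction feeds a human decision, the action shifts the environment, and the model has to be refit on the new outcomes.

```python
# Hypothetical sketch of the feedback loop described above:
# the model predicts, a person interprets and acts, the action
# shifts the environment, and the model must then be updated.
import random

def train(history):
    """Fit a trivial 'model': the running mean of observed outcomes."""
    return sum(history) / len(history)

def human_decision(prediction):
    """The human interprets the prediction and chooses an action."""
    return "intervene" if prediction > 0.5 else "wait"

def environment(action, state):
    """The action changes the very environment the model was trained on."""
    drift = -0.1 if action == "intervene" else 0.05
    return min(max(state + drift + random.gauss(0, 0.02), 0.0), 1.0)

state, history = 0.6, [0.6]
model = train(history)
for step in range(5):
    action = human_decision(model)        # prediction -> interpretation -> action
    state = environment(action, state)    # the world moves in response
    history.append(state)
    model = train(history)                # the black box has to be refit
    print(step, action, round(state, 3), round(model, 3))
```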

Thanks for your provocative article!


Written by Peter Sweeney

Entrepreneur and inventor | 4 startups, 80+ patents | Writes on the science and philosophy of problem solving. Peter@ExplainableStartup.com | @petersweeney
