Explanations are the substrate of progress. Leaving them unexamined denies us the opportunity to improve them, to identify errors, and to make new discoveries possible.
You’re correct: I don’t subscribe to the idea that scientific knowledge is the only kind of knowledge that matters. (Popper didn’t either.) I wouldn’t deny the value of playing music or basketball simply because the skills were in part acquired through repetitive practice.
But I don’t think that explanations are closed (in the sense that I understand the term). Good explanations typically have tremendous reach. They have implications that are frequently unknown when the discovery is first made. They enable predictions and new observations that would otherwise never be found.
I’m only trying to draw attention to the superordinate status of explanations. AI is presently fixated on prediction, and it needs to aim higher.
I highly recommend David Deutsch’s The Beginning of Infinity. I think you’ll find his philosophy of explanations fascinating. Here’s a tidbit on rules of thumb:
Knowledge that is both familiar and uncontroversial is background knowledge. A predictive theory whose explanatory content consists only of background knowledge is a rule of thumb. Because we usually take background knowledge for granted, rules of thumb may seem to be explanationless predictions, but that is always an illusion.
There is always an explanation, whether we know it or not, for why a rule of thumb works. Denying that some regularity in nature has an explanation is effectively the same as believing in the supernatural — saying, ‘That’s not conjuring, it’s actual magic.’ Also, there is always an explanation when a rule of thumb fails, for rules of thumb are always parochial: they hold only in a narrow range of familiar circumstances.