I found myself supporting your policy proposals while rejecting your underlying skepticism about explanations. I won’t retreat to the "end of theory" debate. But I do have questions about the role of explanations in the areas you’ve highlighted.
Explanations are hidden as factors of production in machine learning, under a number of guises (inductive biases, priors, architectures, and so on). Data are observations, entailed by the explanations that direct the observing. Transparency in these areas seems congruent with your overall policy recommendations.
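To make that point concrete, here is a minimal sketch of my own (assuming PyTorch; the layer sizes and the 28×28 input are arbitrary illustrations, not anyone’s actual system) in which the choice of architecture quietly commits the model to an explanation — that local patterns matter and recur across positions — while a fully connected alternative makes no such claim:

```python
# Illustrative sketch only: an architecture choice is a hidden explanatory commitment.
# A convolutional layer asserts that local patterns matter and recur across positions
# (translation invariance); a fully connected layer asserts nothing of the sort.
import torch
import torch.nn as nn

# "Explanation" baked in: images are built from local, position-independent features.
conv_model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # shared weights encode translation invariance
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)

# No such commitment: every pixel may relate to every other pixel in any way at all.
dense_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 10),
)

x = torch.randn(1, 1, 28, 28)  # a dummy 28x28 grayscale "image"
print(conv_model(x).shape, dense_model(x).shape)  # both produce torch.Size([1, 10])
```

Both models map the same input to the same output shape; the difference lies entirely in the explanatory assumption smuggled in through the architecture, which is exactly the kind of upstream choice transparency should reach.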
Also, where these systems are error-prone, a lack of explanations (that is, poor execution in the upstream factors of production) is the likely culprit. Searching for "complex, multi-variable probabilistic correlations" without an explanatory foundation is a risky gambit.
Lastly, some of the most exciting advances in AI are explanatory (generative) approaches. I don’t think we should force AI to be artificially stupid. But explanations are the remedy, not the impediment.