From transfer learning to explanatory knowledge
I’m trying to situate your ideas within a hierarchy of scientific explanations: from theory to model, from model to phenomenological law, and from law down to the actual phenomena. As you highlight, “In AI there is the temptation to get stuck in the phenomenological phase.” What I don’t understand is the mechanism by which transfer learning ratchets knowledge up this hierarchy.
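So that we mean the same thing by “transfer learning,” here is the operational picture I have in mind: a minimal sketch (my own PyTorch/torchvision setup; the pretrained model and the ten-class target task are purely illustrative), in which the transferred “generalization” is nothing more than the frozen weights of a pretrained backbone.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a source domain (ImageNet): its weights are
# the transferable regularities carried over to the new task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred knowledge so it is reused, not re-derived.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a fresh head for an illustrative 10-class target problem;
# only this layer is trained on the target domain.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

On this picture, what “transfers” is a reusable statistical representation, which is why I struggle to see where the ratchet toward explanation comes in.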
You mentioned in your original article that transferable generalizations may encode something deeper, “structural regularities spanning multiple domains.” This seems to make the problem of interpretability/explainability even more pressing. But since those interpretive methods appear only to generalize further, they remain a far cry from the conjectural leaps between theories, models, and phenomena.
I discussed this difference in this comparison of predictive and explanatory knowledge, and I still don’t see how transfer learning breaks out of the shackles of predictive knowledge. It reminds me of probability-raising theories of causation.
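For concreteness, the condition those theories turn on is (standard formulation; notation mine):

$$P(E \mid C) > P(E \mid \neg C)$$

That is, $C$ qualifies as a cause of $E$ just when conditioning on $C$ raises the probability of $E$. The criterion registers statistical dependence but supplies no mechanism, which is exactly my worry about transferred regularities.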
Thanks for your help.